RE: FreeBSD bind performance in FreeBSD 7

2008-03-14 Thread Ted Mittelstaedt


 -Original Message-
 From: Peter Schuller [mailto:[EMAIL PROTECTED]
 Sent: Monday, March 10, 2008 2:02 AM
 To: freebsd-questions@freebsd.org
 Cc: Ted Mittelstaedt; Chris; Adrian Chadd
 Subject: Re: FreeBSD bind performance in FreeBSD 7


  The people complaining about hardware compatibility need
  to pull their heads out.  If they are buying brand new systems
  they are utter fools if they don't check out in advance
  what works and what doesn't.  It's not like there's a
  shortage of experienced people on this list who could
  tell them what to buy.  And if after the fact they find out
  their shiny new PC won't run FreeBSD - then they take it
  back to the retailer and exchange it for a different model.
  Why is this so difficult?

 The difficulty is not in checking out hardware beforehand; the
 problem is
 FINDING hardware that satisfies your requirements. Just because I
 know that
 NIC so-and-so is recommended, it does not mean that I can find a complete
 server that:

 * Is within the budget.
 * Whose NIC is recommended for use in FreeBSD.
 * Whose disk/raid controller is recommended for use in FreeBSD
   - Including proper handling of write caching, cache flushing, etc
 * Is being sold in a fashion that is acceptable with respect to hardware
 support / replacement parts.
 * Otherwise is known to work well with FreeBSD.

 If you are a large company buying 200 servers I'm sure it's not a
 problem to
 get sample servers to try things on, or go for more expensive
 options just
 because of perceived FreeBSD compatibility.

 If you're a poor sod trying to get *one* machine for personal or
 small-company
 use and you want something that works and is stable, especially
 if you want
 it rack mountable, it is NOT necessarily trivial. Part of it is
 the problem
 of finding a solution that meets the requirements, and part of
 it is about
 figuring out whether a particular solution DOES meet the requirements.

 For example, once your cheaper Dell server has arrived and you
 suddenly notice
 that it's delivered without a BBU, and clearly has write caching
 turned on
 based on performance, try asking (remember, this is a lonely
 customer with a
 single service) Dell hardware support whether that particular
 controller will
 honor cache flush requests right down to the constituent
 drives... I did, and
 eventually got a response after 1-2 weeks. But the response was
 such that I
 could not feel confident that the question was accurately
 forwarded to the
 right individual.


That is exactly why computer consulting firms (like the one that
partly owns the ISP I work for) exist.  There's a list of them
on the FreeBSD website that sell hardware.

For the poor sod trying to get 1 machine, he has a choice:

1) pay a trivial couple hundred bucks to a consulting firm that
sells PCs to small businesses to supply the system he needs for
his business; or

2) do it himself and deal with all of the research beforehand,
and all the after-sale support hassles with Dell or HP or whatever.


You see, the problem is that the small business/home office
types see these consumer adverts in the backs of the newspaper
for a $299.99 Dell, and they immediately assume a computer is
a computer is a computer, and that they shouldn't have to pay
a consultant more than $50 to provide everything with all the
trimmings - because after all, all a consultant is going to
do is pick up the phone and place the order, eh?

(frankly, the FBSD folk have it easy - this attitude is 10 times
worse in the Mickeysoft consulting business)

For the home user, his choice is either spending the $300 and
crossing his fingers hoping the thing works at all, or actually
approaching it from a professional point of view and doing what
businesses are supposed to be doing - that is, hiring a
consultant who knows what they are doing, or spending the
same amount of time and money that a knowledgeable consultant
spent.

You think I got my knowledge for free?  I have a basement
full of old computer hardware I bought over the years while
I learned that says otherwise. Care for an $80 CGA card?
Now do you see why consultants go crazy with that "your
knowledge ain't worth anything" attitude?

As long as the FreeBSD community cops the attitude that FBSD is
only for do-it-yourselfers, it's going to be largely ignored by
most of the business community.

In any case, I can count on the fingers of one hand the number
of people who have posted "I'm planning on getting a system that
is going to run FreeBSD - what should I get?" questions on the
mailing list in the last year, so I really tend to discount
this argument.

I'll repeat, the vast majority of people complaining about
hardware problems with FreeBSD are the folks who bought first,
THEN when something didn't work, came running to the mailing
list.  And the vast majority of them claim they cannot take
it back because it's past the UCC-mandated 30-day return
time period, so returning the stuff isn't an option.

Re: FreeBSD bind performance in FreeBSD 7

2008-03-10 Thread Peter Schuller
 The people complaining about hardware compatibility need
 to pull their heads out.  If they are buying brand new systems
 they are utter fools if they don't check out in advance
 what works and what doesn't.  It's not like there's a
 shortage of experienced people on this list who could
 tell them what to buy.  And if after the fact they find out
 their shiny new PC won't run FreeBSD - then they take it
 back to the retailer and exchange it for a different model.
 Why is this so difficult?

The difficulty is not in checking out hardware beforehand; the problem is 
FINDING hardware that satisfies your requirements. Just because I know that 
NIC so-and-so is recommended, it does not mean that I can find a complete 
server that:

* Is within the budget.
* Whose NIC is recommended for use in FreeBSD.
* Whose disk/raid controller is recommended for use in FreeBSD
  - Including proper handling of write caching, cache flushing, etc
* Is being sold in a fashion that is acceptable with respect to hardware 
support / replacement parts.
* Otherwise is known to work well with FreeBSD.

If you are a large company buying 200 servers I'm sure it's not a problem to 
get sample servers to try things on, or go for more expensive options just 
because of perceived FreeBSD compatibility. 

If you're a poor sod trying to get *one* machine for personal or small-company 
use and you want something that works and is stable, especially if you want 
it rack mountable, it is NOT necessarily trivial. Part of it is the problem 
of finding a solution that meets the requirements, and part of it is about 
figuring out whether a particular solution DOES meet the requirements.

For example, once your cheaper Dell server has arrived and you suddenly notice 
that it's delivered without a BBU, and clearly has write caching turned on 
based on performance, try asking (remember, this is a lonely customer with a 
single service) Dell hardware support whether that particular controller will 
honor cache flush requests right down to the constituent drives... I did, and 
eventually got a response after 1-2 weeks. But the response was such that I 
could not feel confident that the question was accurately forwarded to the 
right individual.
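
Peter's doubt above can be sanity-checked empirically instead of waiting on vendor support. Here is a rough sketch of the classic test (my own illustration, not from this thread, and the ~8 ms threshold assumes a single 7200 rpm spindle): time a loop of small write+fsync cycles, and if the average latency is far below the drive's rotational latency, the controller is almost certainly acknowledging flushes out of volatile cache rather than honoring them down to the platters.

```python
import os
import tempfile
import time


def avg_fsync_latency(path, writes=50):
    """Average seconds per write+fsync cycle on the given file."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.monotonic()
        for _ in range(writes):
            os.write(fd, b"x" * 512)  # one small, sector-sized write
            os.fsync(fd)              # ask the OS to flush to stable storage
        return (time.monotonic() - start) / writes
    finally:
        os.close(fd)


if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        target = tmp.name
    try:
        latency = avg_fsync_latency(target)
        # Well under ~8 ms per cycle on a lone 7200 rpm disk suggests the
        # controller (or drive) is acknowledging flushes from cache.
        print(f"average fsync latency: {latency * 1000:.2f} ms")
    finally:
        os.unlink(target)
```

Run it against a file on the suspect array; it only gives a strong hint, not proof, but it answers in seconds what Dell support took weeks to not answer.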

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller [EMAIL PROTECTED]'
Key retrieval: Send an E-Mail to [EMAIL PROTECTED]
E-Mail: [EMAIL PROTECTED] Web: http://www.scode.org





RE: FreeBSD bind performance in FreeBSD 7

2008-03-08 Thread Ted Mittelstaedt


 -Original Message-
 From: Simon Dircks [mailto:[EMAIL PROTECTED]
 Sent: Friday, March 07, 2008 8:27 AM
 To: Ted Mittelstaedt
 Cc: Peter Losher; [EMAIL PROTECTED];
 freebsd-questions@freebsd.org
 Subject: Re: FreeBSD bind performance in FreeBSD 7
 
 
 Ted Mittelstaedt wrote:

  -Original Message-
  From: Peter Losher [mailto:[EMAIL PROTECTED]
  Sent: Monday, March 03, 2008 10:18 PM
  To: Ted Mittelstaedt
  Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
  Subject: Re: FreeBSD bind performance in FreeBSD 7
 
 
  Yeah, ISC just hates FreeBSD... rolls eyes
  
 
  This final report here:
 
  ftp://ftp.isc.org/isc/dns_perf/ISC-TN-2008-1.pdf
 
  is LIGHTYEARS different than the draft here:
 
  http://new.isc.org/proj/dnsperf/OStest.html
 
 
  The draft contains the conclusion:
 

 
 You change your underpants once a year?
 

I just throw them against the wall - if they stick, it's time
for a change.

Seriously, if you think HP only changes its product lineup
once a year, you haven't bought much HP.  It's a very common
occurrence for us to make up a quote for a new HP server, then
by the time the customer signs off on it and we are able
to go order the server, we find it on the constrained list
because they are replacing it with yet another model change.

Ted
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: FreeBSD bind performance in FreeBSD 7

2008-03-08 Thread Al Plant

Ted Mittelstaedt wrote:
  

-Original Message-
From: Simon Dircks [mailto:[EMAIL PROTECTED]
Sent: Friday, March 07, 2008 8:27 AM
To: Ted Mittelstaedt
Cc: Peter Losher; [EMAIL PROTECTED];
freebsd-questions@freebsd.org
Subject: Re: FreeBSD bind performance in FreeBSD 7


Ted Mittelstaedt wrote:

  
  

-Original Message-
From: Peter Losher [mailto:[EMAIL PROTECTED]
Sent: Monday, March 03, 2008 10:18 PM
To: Ted Mittelstaedt
Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
Subject: Re: FreeBSD bind performance in FreeBSD 7


Yeah, ISC just hates FreeBSD... rolls eyes



This final report here:

ftp://ftp.isc.org/isc/dns_perf/ISC-TN-2008-1.pdf

is LIGHTYEARS different than the draft here:

http://new.isc.org/proj/dnsperf/OStest.html


The draft contains the conclusion:

  
  

You change your underpants once a year?




I just throw them against the wall - if they stick, it's time
for a change.

Seriously, if you think HP only changes its product lineup
once a year, you haven't bought much HP.  It's a very common
occurrence for us to make up a quote for a new HP server, then
by the time the customer signs off on it and we are able
to go order the server, we find it on the constrained list
because they are replacing it with yet another model change.

Ted

  

Aloha Ted,

Dell also ships many of its products from a single purchase with
NICs and other components that are not the same in every box.


~Al Plant - Honolulu, Hawaii -  Phone:  808-284-2740
 + http://hawaiidakine.com + http://freebsdinfo.org + 
 + http://aloha50.net   - Supporting - FreeBSD 6.* - 7.* - 8.* +

  email: [EMAIL PROTECTED] 
All that's really worth doing is what we do for others. - Lewis Carroll




RE: FreeBSD bind performance in FreeBSD 7

2008-03-07 Thread Ted Mittelstaedt


 -Original Message-
 From: Peter Losher [mailto:[EMAIL PROTECTED]
 Sent: Monday, March 03, 2008 10:18 PM
 To: Ted Mittelstaedt
 Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
 Subject: Re: FreeBSD bind performance in FreeBSD 7


 Yeah, ISC just hates FreeBSD... rolls eyes

This final report here:

ftp://ftp.isc.org/isc/dns_perf/ISC-TN-2008-1.pdf

is LIGHTYEARS different than the draft here:

http://new.isc.org/proj/dnsperf/OStest.html


The draft contains the conclusion:

...We will use Linux Gentoo 2.6.20.7 for further production testing. We
brought these numbers to the attention of the FreeBSD development team, and
will re-test when FreeBSD 7.1 is released...

This is completely missing in the final.  Added is a bunch of
praise of BIND on commodity hardware.  And also added is the line:

...All computers in the testbed were loaded with identical copies of
FreeBSD 7.0-RELEASE...

which is missing in the draft.

So in other words, it certainly appears that the final is
180 degrees opposite of the draft in its discussion of FreeBSD.
The draft appears to suggest avoiding it - the final appears to
suggest embracing it.

So what, exactly may I ask, were you expecting after
writing that draft?  Everyone here to be happy?

It almost seems to me like the draft was a trial balloon
floated to get the FreeBSD developers to jump in and
do some coding for you at the last minute.

But, I'll say no more about that and turn towards the
report - because it has some significant problems.

I'll start with the beginning:

...We have been particularly interested in the performance of DNSSEC
mechanisms within the DNS protocols and on the use of BIND as a production
server for major zones...

OK, fine and good.  However, the conclusion is rather
different:

...Commodity hardware with BIND 9 software is fast enough by a wide margin
to run large production name service...

What is going on here?  This project started out as
purely observational - merely interested in BIND performance -
and ended up being a proof for the hypothesis that BIND
is good enough to run large nameservers on commodity hardware.

In short, the report is moving from an objective view to a
subjective goal of proving BIND is kick-ass.

It is interesting how the original draft conclusion IS NOT subjective
with regard to BIND (it is with regard to FreeBSD, of course)
and uses the phrase "further production testing", implying
that BIND is still under development, while the final report
uses the language:

...open-source software and commodity hardware are up to the task of
providing full-scale production name...

which definitely implies that BIND is done and ready for
production.

Another thing of interest concerns the OS.

Microsoft Windows 2003 Server is included in
the first breaking-point test.  It is absent from the other
tests.  And the version chosen is old - it is NOT
even Server 2003 R2, nor the RC of Server 2008, which is
available.

Why were the Windows test results even left in the
published report at all?  What purpose do they serve
other than as feel-good Windows-bashing?  If you really
were interested in the results of testing, you would
have wanted to know how BIND did under Windows for the
other tests.  But, as I pointed out, by the time the
later tests were run the goal had stopped being the
pure, objective, observational goal and had become the
subjective "prove BIND is the best" goal.  And as the
Windows results for the breaking test were so low, it
was an embarrassment to keep bothering with it, so it
was dropped.

The report also suffers from NOT listing out the
components of the HP servers and instead offering a
link to HP.  Yeah, how long is that link going to be
valid?  HP changes its website and its product
lineup as often as I change my underpants - a year from
now, that product will be gone and a new reader will have
a snowball's chance in Hell of getting the actual server
specs, and I mean the chipsets in use for the disk controller,
NIC, video, etc.  You know, the stuff that actually
-affects- the performance of different operating systems.
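
For what it's worth, the chipset inventory Ted is asking for is trivial to capture on a running FreeBSD box from `pciconf -lv` output. A rough sketch (my own illustration, not anything from the report) that reduces that output to a driver/vendor/device list suitable for pasting into a test writeup; the SAMPLE text mimics pciconf's format and is made up for demonstration:

```python
import re


def parse_pciconf(text):
    """Reduce `pciconf -lv` style output to (driver, vendor, device) tuples."""
    devices = []
    current = None
    for line in text.splitlines():
        m = re.match(r"(\w+)@pci", line)
        if m:
            # New device header line, e.g. "em0@pci0:2:0:0: class=..."
            current = {"driver": m.group(1), "vendor": "?", "device": "?"}
            devices.append(current)
        elif current is not None:
            # Indented attribute lines: vendor = '...' / device = '...'
            kv = re.match(r"\s+(vendor|device)\s+=\s+'(.*)'", line)
            if kv:
                current[kv.group(1)] = kv.group(2)
    return [(d["driver"], d["vendor"], d["device"]) for d in devices]


# Illustrative sample in pciconf -lv's format (hypothetical hardware):
SAMPLE = """\
em0@pci0:2:0:0: class=0x020000 chip=0x109a8086 rev=0x00 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = '82573L Gigabit Ethernet Controller'
mpt0@pci0:4:0:0: class=0x010000 chip=0x00541000 rev=0x01 hdr=0x00
    vendor     = 'LSI Logic'
    device     = 'SAS 1068 PCI-X Fusion-MPT SAS'
"""

if __name__ == "__main__":
    for driver, vendor, device in parse_pciconf(SAMPLE):
        print(f"{driver}: {vendor} {device}")
```

On an actual test machine one would feed real `pciconf -lv` output into `parse_pciconf` (e.g. via a pipe); publishing that list alongside a report would outlive any vendor product-page link.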

But the biggest hole is the report conclusion and this
shift from objective, to subjective, reporting.  The conclusion
claims BIND is great on commodity hardware, but all it
has actually proven is that BIND is great on this one specific
hardware platform running a couple specific operating systems.

If you really wanted to merely objectively observe BIND on
commodity hardware you should have had your testers stay out of the
setup of the OS and platform.  You should have called
up the developers of the various operating systems you
were going to use - Microsoft among them - and told them
to each send in a group that would build a server to their
spec.  You should have merely set a maximum limit that the
server could cost that was in line with commodity server
hardware costs - something like $2K and it had to be name-brand,
for example - and let all of the vested interest groups do

Re: FreeBSD bind performance in FreeBSD 7

2008-03-07 Thread Simon Dircks

Ted Mittelstaedt wrote:
  

-Original Message-
From: Peter Losher [mailto:[EMAIL PROTECTED]
Sent: Monday, March 03, 2008 10:18 PM
To: Ted Mittelstaedt
Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
Subject: Re: FreeBSD bind performance in FreeBSD 7


Yeah, ISC just hates FreeBSD... rolls eyes



This final report here:

ftp://ftp.isc.org/isc/dns_perf/ISC-TN-2008-1.pdf

is LIGHTYEARS different than the draft here:

http://new.isc.org/proj/dnsperf/OStest.html


The draft contains the conclusion:

  


You change your underpants once a year?


Re: FreeBSD bind performance in FreeBSD 7

2008-03-05 Thread Mark Linimon
  * I am trying to understand what is different about the ISC
  configuration but have not yet found the cause.

 It's called Anti-FreeBSD bias.  You won't find anything.

If this is true, please try to explain to me the following:

 - ISC hosts 5 Netra 1s that comprise most of our sparc64 package build
   cluster.  They are allowing us to add 4 more next week.

 - ISC hosts 3 amd64 machines for our amd64 package build cluster.

 - ISC used to host 3 alpha machines, until we retired them.

 - ISC hosts ftp4.freebsd.org, which is one of the 2 machines that the
   address ftp.freebsd.org rotates to.  This is an extremely high-
   bandwidth machine.

 - ISC hosts several other development machines (I am not aware of
   all the exact ones).

All of this has been in place for years, with the space, power, and
cooling all donated for free.

Kris and others have been doing a tremendous amount of work over the
past 2 years to identify and fix performance problems in FreeBSD.
There have been literally hundreds of regression tests run, resulting
in a large number of cycles of commit/test.  Sometimes the commits do
what we expect, sometimes not.  Lather, rinse, repeat.  The difference
in performance between 6.3R and 7.0R is primarily due to all this
effort.  ISC's re-tests seem to confirm the improvements.

The current speculation is that the difference in the measurements we're
seeing could well be due to our drivers.  If so, let's identify and fix
the problems.  Otherwise, let's try to understand whether there are any
meaningful differences in the way the tests are being run.

Casting aspersions on someone's methodology or motives just because
you (or I) don't like the results is merely nonsense.

AFAICT ISC's business model primarily consists of selling the
ability of BIND to perform under load.  That's the variable they have
to optimize for.  Let's hope that we are part of helping them to do
just that.

mcl


Re: FreeBSD bind performance in FreeBSD 7

2008-03-04 Thread Chris
On 29/02/2008, Ted Mittelstaedt [EMAIL PROTECTED] wrote:




 Device drivers and hardware are a cooperative effort.  The ideal
 is a well-written device driver and well-designed hardware.
 Unfortunately the reality of it appears to be that it costs
 a LOT more money to hire good silicon designers than it costs
 to hire good programmers - so a depressing amount of computer
 hardware out there is very poor hardware, but the hardware's
 shortcomings are made up for by the almost Herculean efforts
 of the software developers.

 I should have thought the invention of the Winmodem (windows-only
 modem) would have made this obvious to the general public
 years ago.

 Unfortunately, the hardware vendors make a lot of effort to
 conceal the crappiness of their designs and most customers
 just care if the device works, they don't care if the only
 way the device can work is if 60% of their system's CPU is
 tied up servicing a device driver that is making up for
 hardware shortcomings, so it is still rather difficult
 for a customer to become informed about what is good and
 what isn't - other than trial and error.

 I hardly think that the example I cited - the 3com 3c905 PCI
 network adapter - is an example of poor support in FreeBSD.
 The FreeBSD driver for the 3c905 worked perfectly well when
 the card used a Lucent-built ASIC.  When 3com decided to
 save 50 cents a card by switching to Broadcom for the
 ASIC manufacturing, the FreeBSD driver didn't work very
 well with those cards - nor did the Linux driver for that
 matter.  This clearly wasn't a driver problem it was a
 problem with Broadcom not following 3com's design specs
 properly.  3com did the only thing they could - which
 was to put a hack into the Windows driver - but of course,
 nobody bothered telling the Linux or FreeBSD community
 about it, we had to find out by dicking around with the
 driver code.

 If datacenters want to purchase poor hardware and run their
 stuff on it, that's their choice.  Just because a piece
 of hardware is mainstream doesn't mean it's good.  It
 mainly means it's inexpensive.

 Ted


Ted, I never meant mainstream = good, but I did mean mainstream
cannot be ignored and written off.  If something is mainstream, it
is for a reason; if the hardware were so poor, I am sure complaints
would be so high it would no longer be mainstream.  Not sure if you
are understanding me: I am most definitely not saying I expect a
cheap network card to perform on par with a premium card.  I am
merely saying it should ideally perform and be as stable as it is
in other operating systems, and if it isn't, look at what can be
improved rather than just saying go buy a new piece of kit.  Is
FreeBSD an operating system for use on premium hardware only?  That
is what it feels like I am reading sometimes.

Now, on the BIND tests: if the hardware used for both Linux and
FreeBSD was the exact same spec, then blaming the hardware is
invalid, as it's apples vs. apples.  Obviously, if the Linux tests
were done on superior hardware, then it's apples vs. oranges and
the tests are invalidated.

Chris


Re: FreeBSD bind performance in FreeBSD 7

2008-03-03 Thread Peter Losher

Ted Mittelstaedt wrote:


My beef with the DNS tests was that ISC ran out and bought
the hardware FIRST, -then- they started testing.  This is
directly contrary to every bit of advice ever given in
the computer industry for the last 50 years - you select
the software FIRST, -then- you buy the hardware that runs it.
In short, it said far more about the incompetence of the
testers than the shortcomings of the software.


This is ridiculous.  ISC is one of the most fervent pro-FreeBSD 
companies out there (basing most of our services on the OS, and 
contributing to the FreeBSD community, including the busiest CVSup &
FTP servers, and having FreeBSD committers on staff).  I will not stand
back and watch folks on a public mailing list call us incompetent
individuals with an anti-FreeBSD bias.


First off the final report was published last Friday at:
http://www.isc.org/pubs/tn/index.pl?tn=isc-tn-2008-1.html
(the server this is served from runs FreeBSD)

I was not one of the direct testers (we had a couple of PhDs handling
that, who I know both use FreeBSD on their personal systems), but as one
of the folks who supported them in their work, I can tell you that the
stats we gave the FreeBSD folks were from a test sponsored by the US
National Science Foundation.  We were mandated to use branded HW, and we
tested several models from HP, Sun, even Iron Systems (whitebox) before
deciding on the HPs.  The mechanisms we used are all documented in the
paper.  We were also asked to test DNS performance on several OSes.


The short version was 'take a standard commercial off-the-shelf server
and see how BIND performs (esp. with DNSSEC) on it'.  We weren't asked
to get hardware that was perfect for Brand X OS; that wasn't part of
the remit.


(We actually use the exact same HP HW for a secondary service where we
host a couple of thousand zones using BIND, including 30+ TLD zones.
Oh, and it runs FreeBSD.)


Yes, we found FreeBSD performed poorly in our initial tests, and I talked
to several folks (including rwatson and kris) about the issue.  Kris had
already been working on improving performance with MySQL and PgSQL and
was interested in doing the same with BIND.  Kris went off and hacked
away, and right before EuroBSDcon last September asked us to re-run the
tests (on the same HW) using a 7.0-CURRENT snapshot; the end results
show a 33,000-query increase over 6.2-RELEASE, bringing FreeBSD
just behind the Linux distros we tested.  I know rwatson and kris have
continually worked on the relevant network stack issues that cover BIND,
and additional performance gains have been found since then, and working
on this issue has been a true partnership between the FreeBSD developers 
and ISC.


BIND isn't perfect, we admit that; we have been constantly improving
its multi-CPU performance, and BIND 9.4 and 9.5 are continuing in that
effort.  We have several members of our dev team who use FreeBSD as
their development platform, including a FreeBSD committer.


So Ted, stop spouting this "ISC is spewing anti-FreeBSD bias" crap;
it flatly isn't true...


Oh, and this email is coming to you via several of ISC's FreeBSD MX
servers, which resolve the freebsd.org name via caching DNS servers
running FreeBSD, to freebsd.org's MX server, over an IPv6 tunnel
supplied by ISC to the FreeBSD project to help FreeBSD eat its own
IPv6 dog food...


Yeah, ISC just hates FreeBSD... rolls eyes

Best Wishes - Peter
--
[EMAIL PROTECTED] | ISC | OpenPGP 0xE8048D08 | The bits must flow





RE: FreeBSD bind performance in FreeBSD 7

2008-03-03 Thread Ted Mittelstaedt


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Chris
 Sent: Friday, February 29, 2008 6:21 PM
 To: Adrian Chadd
 Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
 Subject: Re: FreeBSD bind performance in FreeBSD 7


 On 01/03/2008, Adrian Chadd [EMAIL PROTECTED] wrote:
  On 01/03/2008, Chris [EMAIL PROTECTED] wrote:
 
   You're working around what I just said.  A NIC should perform
equally well as it does in other operating systems; just because it's
cheaper is not an excuse for buggy performance.  There are also other
good network cards apart from the Intel Pro 1000.  I am talking about
stability, not performance: I expect an Intel Pro 1000 to outperform
a Realtek, but I expect both to be stable in terms of connectivity.
I expect a Realtek in FreeBSD to perform as well as a Realtek in
Windows and Linux. :)
 
  Patches please!
 
 
  Adrian
 
 
  --
  Adrian Chadd - [EMAIL PROTECTED]
 

 Ironically, the latest server I got last night has an Intel Pro
 1000 - a rarity :)

 I am just giving feedback: when I speak to people in the datacentre
 and hosting business, the biggest gripe with FreeBSD is hardware
 compatibility.  As I adore FreeBSD I ignore this and work around it,
 but it's definitely reducing take-up.

 Of course I know the current re(4) issues are getting attention, which
 I am thankful for.  I fully understand the time and effort required
 to write drivers, patches, etc., and have no criticisms of the people
 who do this; my complaint is more focused on people claiming there
 are no issues, it's just the hardware.


There aren't issues on hardware that is compatible.

You can't run MacOS X on an off-the-shelf PC, and nobody
complains about it.  You can't run Solaris for the SPARC
on an Intel box, but nobody complains about it.  FreeBSD
is not Java; it is not write once, run anywhere.

If there is any problem with FreeBSD in this respect, it is that
it supports the poor hardware AT ALL.  Of course, we can't
do much about that - a code contributor who gets access
to CVS can put anything they want into the FreeBSD source,
and drivers are a particular problem - since few developers
are going to have duplicates of the hardware, only the
contributing developer really knows if his driver is solid
or not.

Arguably it might be better to drop support for poor hardware,
then the people who had such hardware would not be tempted
to run FreeBSD - thereby having a bad experience with it,
and blaming FreeBSD about it.

I challenge you to find an example of very high quality
hardware that has a driver in FreeBSD that has a lot of
problems.  Yet, you can find a lot of poor quality hardware
that has a FreeBSD driver with a lot of problems.  That
should tell you something - that the issue for the poor
hardware really is just the hardware.

The people complaining about hardware compatibility need
to pull their heads out.  If they are buying brand new systems
they are utter fools if they don't check out in advance
what works and what doesn't.  It's not like there's a
shortage of experienced people on this list who could
tell them what to buy.  And if after the fact they find out
their shiny new PC won't run FreeBSD - then they take it
back to the retailer and exchange it for a different model.
Why is this so difficult?

My beef with the DNS tests was that ISC ran out and bought
the hardware FIRST, -then- they started testing.  This is
directly contrary to every bit of advice ever given in
the computer industry for the last 50 years - you select
the software FIRST, -then- you buy the hardware that runs it.
In short, it said far more about the incompetence of the
testers than the shortcomings of the software.

The people who have USED systems who are bitching about
FreeBSD not being compatible with their stuff need to
get over it.  OK, so they didn't get a chance to select
the hardware, they are using some retired Windows box
that won't run the new version of Windows.  So they come
here and our stuff has a problem with some hardware
part.  Well, OK fine - how does this hurt them?  Their
old computer wasn't usable for Windows anymore, now was it?
In short, their computer at that point was worthless - and
why is it OUR responsibility to make our stuff compatible
with their old computer?  How does us being incompatible
take anything away from them - their computer was scrap
anyway.  If there's a problem, well they can go
to the computer junkyard and exchange their scrap computer
for a different old scrap computer that has compatible
parts.

Ted



Re: FreeBSD bind performance in FreeBSD 7

2008-03-02 Thread Robert Watson


On Sat, 1 Mar 2008, Chris wrote:

Ironically, the latest server I got last night has an Intel Pro 1000 - a 
rarity :)


I am just giving feedback: when I speak to people in the datacentre and 
hosting business, the biggest gripe with FreeBSD is hardware compatibility.  
As I adore FreeBSD I ignore this and work around it, but it's definitely 
reducing take-up.


Of course I know the current re(4) issues are getting attention, which I am 
thankful for.  I fully understand the time and effort required to write 
drivers, patches, etc., and have no criticisms of the people who do this; 
my complaint is more focused on people claiming there are no issues, it's 
just the hardware.


It's no coincidence that Intel cards work quite well with FreeBSD, given that 
Intel has hired developers to make FreeBSD work well on their cards.  The same 
goes for companies like Broadcom, Chelsio, Neterion, etc, who provide not only 
the necessary documentation, but also put development resources into writing 
and QAing drivers.  Put pressure on your hardware providers to do the same 
thing for their hardware -- one or two people asking may not do the trick, but 
a few large customers beating on their sales engineers can make a big 
difference, and so can larger numbers of smaller customers.


Robert N M Watson
Computer Laboratory
University of Cambridge


Re: FreeBSD bind performance in FreeBSD 7

2008-03-01 Thread Christian Brueffer
On Sat, Mar 01, 2008 at 02:20:58AM +, Chris wrote:
 On 01/03/2008, Adrian Chadd [EMAIL PROTECTED] wrote:
  On 01/03/2008, Chris [EMAIL PROTECTED] wrote:
 
   You're working around what I just said.  A NIC should perform equally
well as it does in other operating systems; just because it's cheaper is
not an excuse for buggy performance.  There are also other good network
cards apart from the Intel Pro 1000.  I am talking about stability, not
performance - I expect an Intel Pro 1000 to outperform a Realtek, but I
expect both to be stable in terms of connectivity.  I expect a Realtek in
freebsd to perform as well as a Realtek in windows and linux. :)
 
  Patches please!
 
 
  Adrian
 
 
  --
  Adrian Chadd - [EMAIL PROTECTED]
 
 
 Ironically, the latest server I got last night has an Intel Pro 1000 - a rarity :)
 
 I am just giving feedback, as when I speak to people in the datacentre
 and hosting business the biggest gripe with freebsd is hardware
 compatibility; as I adore freebsd I ignore this and work round it, but
 it's definitely reducing take-up.
 
 Of course I know the current re issues are getting attention, which I am
 thankful for; I fully understand the time and effort required to write
 driver patches etc., and have no criticisms of the people who do this.
 My complaint is more focused on people claiming there are no issues,
 it's just the hardware.
 

Pyun YongHyeon has fixed a lot of driver issues (e.g. in re(4), bfe(4),
vr(4)) over the last few months; many fixes are already in CURRENT or
RELENG_7 (not sure how many of them made it into 7.0-RELEASE) or were
posted as patches to the current@ mailing list.

If you have problems, please see if they persist with a CURRENT snapshot.
If they do, please post to the current@ mailing list with details.

- Christian

-- 
Christian Brueffer  [EMAIL PROTECTED]   [EMAIL PROTECTED]
GPG Key: http://people.freebsd.org/~brueffer/brueffer.key.asc
GPG Fingerprint: A5C8 2099 19FF AACA F41B  B29B 6C76 178C A0ED 982D




Re: FreeBSD bind performance in FreeBSD 7

2008-02-29 Thread Oliver Herold
Maybe the same hardware performs _sometimes_ better in Linux.  It
differs from kernel release to kernel release, and of course from distro
to distro.  So 'better' is sometimes just _different_.

--Oliver

Chris [EMAIL PROTECTED] wrote:
 On 29/02/2008, Ted Mittelstaedt [EMAIL PROTECTED] wrote:
 
 
   -Original Message-
   From: [EMAIL PROTECTED]
   [mailto:[EMAIL PROTECTED] Behalf Of Sam Leffler
   Sent: Wednesday, February 27, 2008 8:54 AM
   To: Ted Mittelstaedt
   Cc: [EMAIL PROTECTED]; Kris Kennaway; Oliver Herold;
   freebsd-questions@freebsd.org
   Subject: Re: FreeBSD bind performance in FreeBSD 7
  
  
   Ted Mittelstaedt wrote:
   
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Kris Kennaway
Sent: Monday, February 25, 2008 12:18 PM
To: Oliver Herold; freebsd-questions@freebsd.org;
[EMAIL PROTECTED]
Subject: Re: FreeBSD bind performance in FreeBSD 7
   
   
Oliver Herold wrote:
   
Hi,
   
I saw this bind benchmarks just some minutes ago,
   
http://new.isc.org/proj/dnsperf/OStest.html
   
is this true for FreeBSD 7 (current state: RELENG_7/7.0R) too? Or is
this something verified only for the state of development
   back in August
2007?
   
I have been trying to replicate this.  ISC have kindly given me access
to their test data but I am seeing Linux performing much slower than
FreeBSD with the same ISC workload.
   
   
   
Kris,
   
  Every couple years we go through this with ISC.  They come out with
a new version of BIND then claim that nothing other than Linux can
run it well.  I've seen this nonsense before and it's tiresome.
   
Incidentally, the query tool they used, queryperf, has been replaced
by dnsperf.  Someone needs to look at that port -
   /usr/ports/dns/dnsperf -
as it has a build dependency on bind9.  Well, bind 9.3.4 is part of
   6.3-RELEASE,
and I was rather irked when I ran make in the dnsperf port and it
stupidly began the process of downloading and building the
same version of BIND that I was already running on my server.
   
   
* I am trying to understand what is different about the ISC
configuration but have not yet found the cause.
   
   
It's called Anti-FreeBSD bias.  You won't find anything.
   
   
e.g. NSD
(ports/dns/nsd) is a much faster and more scalable DNS server than BIND
(because it is better optimized for the smaller set of features it
supports).
   
   
   
When you make remarks like that it's no wonder ISC is in the business
of slamming FreeBSD.  People used to make the same claims about djbdns
but I noticed over the last few years they don't seem to be doing
that anymore.
   
If nsd is so much better, then yank bind out of the base FreeBSD and
replace it with nsd.  Of course that will make more work for me
when I regen our nameservers here since nsd will be the first thing
on the rm list.
   
  
   Please save your rhetoric for some other forum.  The ISC folks have been
   working with us to understand what's going on.
 
  Did anyone try disabling the onboard NIC and put in an Intel
  Pro/1000 in the PCI express slot in the server and retest with
  both Linux and FreeBSD?  As I run Proliants for a living,
  this stuck out to me like a sore thumb.  The onboard NIC
  in the systems they used for the testbed is just shit.  Hell,
  just about anything Broadcom makes is shit.  They even managed
  to screw up the 3c905 ASIC when 3com switched to using them
  as the supplier (from Lucent) - I've watched those card versions
  panic Linux systems and drop massive packets in FreeBSD,
  when the Lucent-made chipped cards worked fine.
 
   I'm not aware of any
   anti-FreeBSD slams going on; mostly uninformed comments.
  
 
  It's customary in the industry before publishing rather unflattering
  results to call in the team in charge of the unflattering
  product and give them a chance to verify that the tester
  really knew what they were doing.
 
  FreeBSD has got slammed a number of times in the past by
  testers who didn't do this.  In fact, as I recall, the impetus
  for fixing the extended-memory probe (for RAM above 16MB) was a
  slam in a trade rag from a tester who didn't bother
  recompiling the FreeBSD kernel to recognize the complete
  amount of RAM in the server before running it up against Linux.
 
  Maybe I am wrong and the ISC team did in fact call you guys
  in before publishing the results - but the wording of
  the entire site (not just the test results) indicated
  they did their testing and informed FreeBSD only after the fact,
  after publishing.  Not nice.
 
  Ted
 
 
 A weakness of freebsd is its fussiness over hardware, in particular
 network cards; time and time again I see posts here telling people to
 go out buying expensive Intel Pro 1000 cards just so they can use the
 operating system properly, when I think it's reasonable to expect

Re: FreeBSD bind performance in FreeBSD 7

2008-02-29 Thread Chris
On 29/02/2008, Ted Mittelstaedt [EMAIL PROTECTED] wrote:


  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] Behalf Of Sam Leffler
  Sent: Wednesday, February 27, 2008 8:54 AM
  To: Ted Mittelstaedt
  Cc: [EMAIL PROTECTED]; Kris Kennaway; Oliver Herold;
  freebsd-questions@freebsd.org
  Subject: Re: FreeBSD bind performance in FreeBSD 7
 
 
  Ted Mittelstaedt wrote:
  
   -Original Message-
   From: [EMAIL PROTECTED]
   [mailto:[EMAIL PROTECTED] Behalf Of Kris Kennaway
   Sent: Monday, February 25, 2008 12:18 PM
   To: Oliver Herold; freebsd-questions@freebsd.org;
   [EMAIL PROTECTED]
   Subject: Re: FreeBSD bind performance in FreeBSD 7
  
  
   Oliver Herold wrote:
  
   Hi,
  
   I saw this bind benchmarks just some minutes ago,
  
   http://new.isc.org/proj/dnsperf/OStest.html
  
   is this true for FreeBSD 7 (current state: RELENG_7/7.0R) too? Or is
   this something verified only for the state of development
  back in August
   2007?
  
   I have been trying to replicate this.  ISC have kindly given me access
   to their test data but I am seeing Linux performing much slower than
   FreeBSD with the same ISC workload.
  
  
  
   Kris,
  
 Every couple years we go through this with ISC.  They come out with
   a new version of BIND then claim that nothing other than Linux can
   run it well.  I've seen this nonsense before and it's tiresome.
  
    Incidentally, the query tool they used, queryperf, has been replaced
    by dnsperf.  Someone needs to look at that port -
   /usr/ports/dns/dnsperf -
    as it has a build dependency on bind9.  Well, bind 9.3.4 is part of
   6.3-RELEASE,
    and I was rather irked when I ran make in the dnsperf port and it
    stupidly began the process of downloading and building the
    same version of BIND that I was already running on my server.
  
  
   * I am trying to understand what is different about the ISC
   configuration but have not yet found the cause.
  
  
   It's called Anti-FreeBSD bias.  You won't find anything.
  
  
   e.g. NSD
   (ports/dns/nsd) is a much faster and more scalable DNS server than BIND
   (because it is better optimized for the smaller set of features it
   supports).
  
  
  
   When you make remarks like that it's no wonder ISC is in the business
   of slamming FreeBSD.  People used to make the same claims about djbdns
   but I noticed over the last few years they don't seem to be doing
   that anymore.
  
    If nsd is so much better, then yank bind out of the base FreeBSD and
   replace it with nsd.  Of course that will make more work for me
   when I regen our nameservers here since nsd will be the first thing
   on the rm list.
  
 
  Please save your rhetoric for some other forum.  The ISC folks have been
  working with us to understand what's going on.

 Did anyone try disabling the onboard NIC and put in an Intel
 Pro/1000 in the PCI express slot in the server and retest with
 both Linux and FreeBSD?  As I run Proliants for a living,
 this stuck out to me like a sore thumb.  The onboard NIC
 in the systems they used for the testbed is just shit.  Hell,
 just about anything Broadcom makes is shit.  They even managed
 to screw up the 3c905 ASIC when 3com switched to using them
  as the supplier (from Lucent) - I've watched those card versions
 panic Linux systems and drop massive packets in FreeBSD,
 when the Lucent-made chipped cards worked fine.

  I'm not aware of any
  anti-FreeBSD slams going on; mostly uninformed comments.
 

 It's customary in the industry before publishing rather unflattering
 results to call in the team in charge of the unflattering
 product and give them a chance to verify that the tester
 really knew what they were doing.

 FreeBSD has got slammed a number of times in the past by
 testers who didn't do this.  In fact, as I recall, the impetus
 for fixing the extended-memory probe (for RAM above 16MB) was a
 slam in a trade rag from a tester who didn't bother
 recompiling the FreeBSD kernel to recognize the complete
 amount of RAM in the server before running it up against Linux.

 Maybe I am wrong and the ISC team did in fact call you guys
 in before publishing the results - but the wording of
 the entire site (not just the test results) indicated
 they did their testing and informed FreeBSD only after the fact,
 after publishing.  Not nice.

 Ted


A weakness of freebsd is its fussiness over hardware, in particular
network cards; time and time again I see posts here telling people to
go out buying expensive Intel Pro 1000 cards just so they can use the
operating system properly, when I think it's reasonable to expect
mainstream hardware to work.  E.g. Realtek is mainstream and common as
an onboard NIC, but the support in freebsd is poor, which only serves
to make datacentres shy away from freebsd.  If the same hardware performs
better in linux, then the hardware isn't to blame for worse performance
in fbsd.

Chris

Re: FreeBSD bind performance in FreeBSD 7

2008-02-29 Thread Mike Tancsa

At 10:44 AM 2/29/2008, Chris wrote:


A weakness of freebsd is its fussiness over hardware, in particular
network cards; time and time again I see posts here telling people to
go out buying expensive Intel Pro 1000 cards just so they can use the
operating system properly, when I think it's reasonable to expect
mainstream hardware to work, e.g. realtek is mainstream and common as a


A Realtek as in rl (not re) works quite well (as in stable,
predictable performance) -- we buy these for about $5 each from our
supplier, and they are quite common.  While it would be nice if all
network cards worked as well as the em NICs, it's an issue that is
easy to work around -- after all, I would rather be limited by my NIC
driver choice than by vm and network stack issues which I can't
work around.  Also, thankfully, a large chunk of the server MB market
uses em NICs.  Yes, bge/bce based NICs do seem to perform poorly on
FreeBSD.  Hopefully Broadcom will put similar resources into driver
development as Intel does/has.


---Mike 




Re: FreeBSD bind performance in FreeBSD 7

2008-02-29 Thread Chris
On 29/02/2008, Tom Evans [EMAIL PROTECTED] wrote:
 On Fri, 2008-02-29 at 15:44 +, Chris wrote:
  On 29/02/2008, Ted Mittelstaedt [EMAIL PROTECTED] wrote:
 
  A weakness of freebsd is its fussiness over hardware, in particular
  network cards; time and time again I see posts here telling people to
  go out buying expensive Intel Pro 1000 cards just so they can use the
  operating system properly, when I think it's reasonable to expect
  mainstream hardware to work.  E.g. Realtek is mainstream and common as
  an onboard NIC, but the support in freebsd is poor, which only serves
  to make datacentres shy away from freebsd.  If the same hardware performs
  better in linux, then the hardware isn't to blame for worse performance
  in fbsd.
 
  Chris

 Not to come down too hard on you, but the reason why Pro/1000 chipsets
 are reasonably pricey, and uncommon to find as an integrated NIC, except
 on server boards or intel own brand mobos, is that it is decent
 hardware, and hence costs real money to use. Consumer NICs like Realtek,
 Via Rhine and (imo) Marvell are cheap tat that 'just about' works, until
 you put it under heavy stress. I have encountered a series of Marvell
 based chips on my personal home computers that work about as well as a
 slap around the face. Also, even from the 'good' manufacturers, like
 broadcom and intel, you have 'consumer' parts, which are reasonably
 cheap, like bge(4) supported parts, and 'professional' parts, like
 bce(4) parts. One should work fine under moderate load, one should work
 fine under heavy load. One will cost $4, one will cost $100.

 I'm not saying the drivers are bug-free, but if you want performance and
 reliability, you get an em(4) or another professional chipset. Only a
 few months ago at work, we had to  order around 75 Pro/1000s as we had
 had enough of crashes from our bce(4) based integrated NICs on our Dell
 2950s. Fortunately for our wallet, we managed to fix the issues in the
 driver/hardware before our supplier could source that many - thanks
 David Christensen!

 Personally, I wouldn't put something in a data-centre with only a vr(4)
 or re(4), regardless of OS.

 Tom




You're working around what I just said.  A NIC should perform equally
well as it does in other operating systems; just because it's cheaper is
not an excuse for buggy performance.  There are also other good network
cards apart from the Intel Pro 1000.  I am talking about stability, not
performance - I expect an Intel Pro 1000 to outperform a Realtek, but I
expect both to be stable in terms of connectivity.  I expect a Realtek
in freebsd to perform as well as a Realtek in windows and linux. :)

We have our own opinions, but for many tasks a vr, re, bge, etc., or
even an rl, does the job required just fine.  I have seen linux servers
using rl adaptors outperform freebsd servers with superior cards because
the linux driver is better.  I do agree it's a sad state of affairs that
datacentres like to rent out servers built from desktop parts, but
unfortunately that's the market for you, unless you pay a premium or go
with your own hardware colocated.

Chris


RE: FreeBSD bind performance in FreeBSD 7

2008-02-29 Thread Ted Mittelstaedt


 -Original Message-
 From: Chris [mailto:[EMAIL PROTECTED]
 Sent: Friday, February 29, 2008 7:45 AM
 To: Ted Mittelstaedt
 Cc: Sam Leffler; [EMAIL PROTECTED]; Oliver Herold; Kris
 Kennaway; freebsd-questions@freebsd.org
 Subject: Re: FreeBSD bind performance in FreeBSD 7
 

 A weakness of freebsd is its fussiness over hardware, in particular
 network cards; time and time again I see posts here telling people to
 go out buying expensive Intel Pro 1000 cards just so they can use the
 operating system properly, when I think it's reasonable to expect
 mainstream hardware to work.  E.g. Realtek is mainstream and common as
 an onboard NIC, but the support in freebsd is poor, which only serves
 to make datacentres shy away from freebsd.  If the same hardware performs
 better in linux, then the hardware isn't to blame for worse performance
 in fbsd.
 

Device drivers and hardware are a cooperative effort.  The ideal
is a well-written device driver and well-designed hardware.
Unfortunately the reality of it appears to be that it costs
a LOT more money to hire good silicon designers than it costs
to hire good programmers - so a depressing amount of computer
hardware out there is very poor hardware, but the hardware's
shortcomings are made up by almost Herculean efforts of the
software developers.

I should have thought the invention of the Winmodem (windows-only
modem) would have made this obvious to the general public
years ago.

Unfortunately, the hardware vendors make a lot of effort to
conceal the crappiness of their designs and most customers
just care if the device works, they don't care if the only
way the device can work is if 60% of their system's CPU is
tied up servicing a device driver that is making up for
hardware shortcomings, so it is still rather difficult
for a customer to become informed about what is good and
what isn't - other than trial and error.

I hardly think that the example I cited - the 3com 3c905 PCI
network adapter - is an example of poor support in FreeBSD.
The FreeBSD driver for the 3c905 worked perfectly well when
the card used a Lucent-built ASIC.  When 3com decided to
save 50 cents a card by switching to Broadcom for the
ASIC manufacturing, the FreeBSD driver didn't work very
well with those cards - nor did the Linux driver, for that
matter.  This clearly wasn't a driver problem; it was a
problem with Broadcom not following 3com's design specs
properly.  3com did the only thing they could - which
was to put a hack into the Windows driver - but of course
nobody bothered telling the Linux or FreeBSD community
about it; we had to find out by dicking around with the
driver code.

If datacenters want to purchase poor hardware and run their
stuff on it, that's their choice.  Just because a piece
of hardware is mainstream doesn't mean it's good.  It
mainly means it's inexpensive.

Ted


Re: FreeBSD bind performance in FreeBSD 7

2008-02-29 Thread Tom Evans
On Fri, 2008-02-29 at 15:44 +, Chris wrote:
 On 29/02/2008, Ted Mittelstaedt [EMAIL PROTECTED] wrote:
 
 A weakness of freebsd is its fussiness over hardware, in particular
 network cards; time and time again I see posts here telling people to
 go out buying expensive Intel Pro 1000 cards just so they can use the
 operating system properly, when I think it's reasonable to expect
 mainstream hardware to work.  E.g. Realtek is mainstream and common as
 an onboard NIC, but the support in freebsd is poor, which only serves
 to make datacentres shy away from freebsd.  If the same hardware performs
 better in linux, then the hardware isn't to blame for worse performance
 in fbsd.
 
 Chris

Not to come down too hard on you, but the reason why Pro/1000 chipsets
are reasonably pricey, and uncommon to find as an integrated NIC, except
on server boards or intel own brand mobos, is that it is decent
hardware, and hence costs real money to use. Consumer NICs like Realtek,
Via Rhine and (imo) Marvell are cheap tat that 'just about' works, until
you put it under heavy stress. I have encountered a series of Marvell
based chips on my personal home computers that work about as well as a
slap around the face. Also, even from the 'good' manufacturers, like
broadcom and intel, you have 'consumer' parts, which are reasonably
cheap, like bge(4) supported parts, and 'professional' parts, like
bce(4) parts. One should work fine under moderate load, one should work
fine under heavy load. One will cost $4, one will cost $100.

I'm not saying the drivers are bug-free, but if you want performance and
reliability, you get an em(4) or another professional chipset. Only a
few months ago at work, we had to  order around 75 Pro/1000s as we had
had enough of crashes from our bce(4) based integrated NICs on our Dell
2950s. Fortunately for our wallet, we managed to fix the issues in the
driver/hardware before our supplier could source that many - thanks
David Christensen!

Personally, I wouldn't put something in a data-centre with only a vr(4)
or re(4), regardless of OS. 

Tom





Re: FreeBSD bind performance in FreeBSD 7

2008-02-29 Thread Fred C


On Feb 29, 2008, at 7:44 AM, Chris wrote:



A weakness of freebsd is its fussyness over hardware in particular
network cards, time and time again I see posts here telling people to
go out buying expensive intel pro 1000 cards just so they can use the
operating system properly when I think its reasonable to expect
mainstream hardware to work, eg. realtek is mainstream and common as a
onboard nic but the support in freebsd is poor and only serving
datacentres to shy away from freebsd.  If the same hardware performs
better in linux then the hardware isnt to blame for worser performance
in fbsd.



The weakness comes mainly from the hardware.

It is like NASCAR: you don't run NASCAR in your everyday Prius.  You
need a car with stronger, ultra-performing components.  Your Prius
may be fine for your commute and your grocery shopping, but when it
comes to a race it will perform very badly.


Here the problem is the same.  For your everyday home desktop machine
any low-end network card is fine.  But when you want to handle several
thousand connections per second, you need hardware that can
handle it.


--
Fred C!
PGP-KeyID: E7EA02EC3B487EE9
PGP-FingerPrint: A906101E2CCDBB18D7BD09AEE7EA02EC3B487EE9



___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: FreeBSD bind performance in FreeBSD 7

2008-02-29 Thread Adrian Chadd
On 01/03/2008, Chris [EMAIL PROTECTED] wrote:

 You're working around what I just said.  A NIC should perform equally
  well as it does in other operating systems; just because it's cheaper
  is not an excuse for buggy performance.  There are also other good
  network cards apart from the Intel Pro 1000.  I am talking about
  stability, not performance - I expect an Intel Pro 1000 to outperform
  a Realtek, but I expect both to be stable in terms of connectivity.  I
  expect a Realtek in freebsd to perform as well as a Realtek in windows
  and linux. :)

Patches please!


Adrian


-- 
Adrian Chadd - [EMAIL PROTECTED]


Re: FreeBSD bind performance in FreeBSD 7

2008-02-29 Thread Chris
On 01/03/2008, Adrian Chadd [EMAIL PROTECTED] wrote:
 On 01/03/2008, Chris [EMAIL PROTECTED] wrote:

  You're working around what I just said.  A NIC should perform equally
   well as it does in other operating systems; just because it's cheaper
   is not an excuse for buggy performance.  There are also other good
   network cards apart from the Intel Pro 1000.  I am talking about
   stability, not performance - I expect an Intel Pro 1000 to outperform
   a Realtek, but I expect both to be stable in terms of connectivity.  I
   expect a Realtek in freebsd to perform as well as a Realtek in windows
   and linux. :)

 Patches please!


 Adrian


 --
 Adrian Chadd - [EMAIL PROTECTED]


Ironically, the latest server I got last night has an Intel Pro 1000 - a rarity :)

I am just giving feedback, as when I speak to people in the datacentre
and hosting business the biggest gripe with freebsd is hardware
compatibility; as I adore freebsd I ignore this and work round it, but
it's definitely reducing take-up.

Of course I know the current re issues are getting attention, which I am
thankful for; I fully understand the time and effort required to write
driver patches etc., and have no criticisms of the people who do this.
My complaint is more focused on people claiming there are no issues,
it's just the hardware.

Thanks

Chris


Re: FreeBSD bind performance in FreeBSD 7

2008-02-28 Thread Kris Kennaway

Adrian Chadd wrote:

(Sorry for top posting.)

It's not actually -that- bad an idea to compare different applications.
It sets the bar for how far the entire system {hardware, OS,
application, network} can be pushed.

If nsd beats bind9 by say 5 or 10% overall, then it's nothing to write
home about.  If nsd beats bind9 by 50% and shows similar
kernel/interrupt-space time use, then that's something to stare at.  Even
if it's just because nsd 'does less' and gives more CPU time to
system/interrupt processing, you've identified that the system -can- be
pushed harder, and perhaps working with the bind9 guys a little more
can identify what they're doing wrong.

That's how I noticed the performance differences between various
platforms running Squid a few years ago - for example, gettimeofday()
being called way, way too frequently - and I compared Squid's
kernel/interrupt time, syscall footprint, hwpmc/oprofile traces, etc.
against other proxy-capable applications (varnish, lighttpd, apache)
to see exactly what they're doing differently.
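Adrian's gettimeofday() observation points at a common fix: amortize clock reads across many events instead of reading the clock per request.  A minimal Python sketch of that idea (purely illustrative - the class name and refresh policy here are invented for the example, not Squid's actual mechanism):

```python
import time

class CoarseClock:
    """Return a cached timestamp, re-reading the real clock only once
    every `refresh_every` events -- trading a little timestamp accuracy
    for far fewer clock syscalls on a hot path."""

    def __init__(self, refresh_every=100):
        self.refresh_every = refresh_every
        self.events = 0       # simulated requests handled
        self.clock_reads = 0  # actual reads of the real clock
        self._cached = self._read()

    def _read(self):
        self.clock_reads += 1
        return time.time()

    def now(self):
        # Refresh the cached value only every Nth event; a real server
        # might instead refresh from a periodic timer tick.
        if self.events % self.refresh_every == 0:
            self._cached = self._read()
        self.events += 1
        return self._cached

clock = CoarseClock(refresh_every=100)
for _ in range(10_000):
    t = clock.now()           # 10,000 "requests"...
print(clock.clock_reads)      # ...but only 101 real clock reads
```

Profiling with ktrace/truss (FreeBSD) or strace (Linux) before and after a change like this is exactly how a syscall-footprint difference of the kind Adrian describes would show up.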


Yep, and in this case NSD is currently 90% faster with prospects to push 
it even higher with some further kernel changes (so far we have improved 
it by 45%).  BIND is limited by its own architecture, so improvements 
cannot be made by modifying the kernel.


Anyway, the motivation here is not a DNS deathmatch, but part of our 
ongoing effort to look for aspects of FreeBSD performance that can be 
improved.  Currently we are looking at UDP performance, and DNS serving 
was thought to be a good model for that.  It turns out that BIND does 
not stress the kernel, but NSD does.
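Kris's point that DNS serving is mostly a UDP request/response workload can be illustrated with a tiny loopback benchmark.  A hedged sketch (this is not the ISC/dnsperf harness - the echo "protocol", packet size, and query count are invented for the example):

```python
import socket
import threading
import time

def echo_server(sock):
    # Answer each datagram with its own payload, DNS-style
    # (one request in, one response out), until told to quit.
    while True:
        data, addr = sock.recvfrom(512)
        if data == b"quit":
            break
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # let the kernel pick a port
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(2)
N = 1000
start = time.time()
for i in range(N):
    client.sendto(b"query-%d" % i, ("127.0.0.1", port))
    reply, _ = client.recvfrom(512)       # block for the "answer"
elapsed = time.time() - start
print(f"{N} round trips in {elapsed:.3f}s ({N / elapsed:.0f} queries/sec)")
client.sendto(b"quit", ("127.0.0.1", port))
```

Because every query is a syscall-heavy sendto/recvfrom pair, a loop like this spends most of its time in the kernel's UDP path - which is consistent with a lean server such as NSD exposing kernel limits that BIND's userland overhead hides.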


Kris



RE: FreeBSD bind performance in FreeBSD 7

2008-02-28 Thread Ted Mittelstaedt


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Sam Leffler
 Sent: Wednesday, February 27, 2008 8:54 AM
 To: Ted Mittelstaedt
 Cc: [EMAIL PROTECTED]; Kris Kennaway; Oliver Herold;
 freebsd-questions@freebsd.org
 Subject: Re: FreeBSD bind performance in FreeBSD 7


 Ted Mittelstaedt wrote:
 
  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] Behalf Of Kris Kennaway
  Sent: Monday, February 25, 2008 12:18 PM
  To: Oliver Herold; freebsd-questions@freebsd.org;
  [EMAIL PROTECTED]
  Subject: Re: FreeBSD bind performance in FreeBSD 7
 
 
  Oliver Herold wrote:
 
  Hi,
 
  I saw this bind benchmarks just some minutes ago,
 
  http://new.isc.org/proj/dnsperf/OStest.html
 
  is this true for FreeBSD 7 (current state: RELENG_7/7.0R) too? Or is
  this something verified only for the state of development
 back in August
  2007?
 
  I have been trying to replicate this.  ISC have kindly given me access
  to their test data but I am seeing Linux performing much slower than
  FreeBSD with the same ISC workload.
 
 
 
  Kris,
 
Every couple years we go through this with ISC.  They come out with
  a new version of BIND then claim that nothing other than Linux can
  run it well.  I've seen this nonsense before and it's tiresome.
 
  Incidentally, the query tool they used, queryperf, has been replaced
  by dnsperf.  Someone needs to look at that port -
 /usr/ports/dns/dnsperf -
  as it has a build dependency on bind9.  Well, bind 9.3.4 is part of
 6.3-RELEASE,
  and I was rather irked when I ran make in the dnsperf port and it
  stupidly began the process of downloading and building the
  same version of BIND that I was already running on my server.
 
 
  * I am trying to understand what is different about the ISC
  configuration but have not yet found the cause.
 
 
  It's called Anti-FreeBSD bias.  You won't find anything.
 
 
  e.g. NSD
  (ports/dns/nsd) is a much faster and more scalable DNS server than BIND
  (because it is better optimized for the smaller set of features it
  supports).
 
 
 
  When you make remarks like that it's no wonder ISC is in the business
  of slamming FreeBSD.  People used to make the same claims about djbdns
  but I noticed over the last few years they don't seem to be doing
  that anymore.
 
  If nsd is so much better, then yank bind out of the base FreeBSD and
  replace it with nsd.  Of course that will make more work for me
  when I regen our nameservers here since nsd will be the first thing
  on the rm list.
 

 Please save your rhetoric for some other forum.  The ISC folks have been
 working with us to understand what's going on.

Did anyone try disabling the onboard NIC and put in an Intel
Pro/1000 in the PCI express slot in the server and retest with
both Linux and FreeBSD?  As I run Proliants for a living,
this stuck out to me like a sore thumb.  The onboard NIC
in the systems they used for the testbed is just shit.  Hell,
just about anything Broadcom makes is shit.  They even managed
to screw up the 3c905 ASIC when 3com switched to using them
as the supplier (from Lucent) - I've watched those card versions
panic Linux systems and drop massive packets in FreeBSD,
when the Lucent-made chipped cards worked fine.

 I'm not aware of any
 anti-FreeBSD slams going on; mostly uninformed comments.


It's customary in the industry before publishing rather unflattering
results to call in the team in charge of the unflattering
product and give them a chance to verify that the tester
really knew what they were doing.

FreeBSD has got slammed a number of times in the past by
testers who didn't do this.  In fact, as I recall, the impetus
for fixing the extended-memory probe (for RAM above 16MB) was a
slam in a trade rag from a tester who didn't bother
recompiling the FreeBSD kernel to recognize the complete
amount of RAM in the server before running it up against Linux.

Maybe I am wrong and the ISC team did in fact call you guys
in before publishing the results - but the wording of
the entire site (not just the test results) indicates
they did their testing and informed FreeBSD only after
publishing.  Not nice.

Ted

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


RE: FreeBSD bind performance in FreeBSD 7

2008-02-28 Thread Ted Mittelstaedt


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Kris Kennaway
 Sent: Wednesday, February 27, 2008 2:57 AM
 To: Ted Mittelstaedt
 Cc: Oliver Herold; freebsd-questions@freebsd.org
 Subject: Re: FreeBSD bind performance in FreeBSD 7


  * I am trying to understand what is different about the ISC
  configuration but have not yet found the cause.
 
  It's called Anti-FreeBSD bias.  You won't find anything.

 This is false, but I didn't expect any better from you.

 ISC relies heavily on FreeBSD internally, and contributes *lots* of
 resources to the FreeBSD project, including hosting one half of
 ftp.freebsd.org and employing several FreeBSD developers.


So what?  Microsoft has used FreeBSD in the past for its
DNS servers, and as far as I know still uses Linux or BSD for
the nameservers for their download sites (or rather, the
outsourcer they use doesn't use Windows for its DNS), and
they have never had anything good to say about FreeBSD -
with the exception of the version 1 port of C# to it (which
they dropped in version 2).  When Microsoft took over
Hotmail, Hotmail was running completely on FreeBSD, and
several leaked internal documents showed many internal
Microsoft people were highly impressed by FreeBSD once
they got into it, but that didn't stop Microsoft from
publicly castigating FreeBSD in its online "how we
migrated Hotmail to the (superior) Windows platform"
whitepapers.

Apple's dependence on FreeBSD is legendary - yet Steve
Jobs has several times at MacWorld referred to Darwin as
based on Linux, as Linux-like, and as similar to Linux,
all of which are bald-faced lies.  (At least, according
to the Apple website, which credits FreeBSD here:
http://developer.apple.com/opensource/index.html)

The point here is that there are MANY organizations that
publicly beat the Linux bandwagon drum yet privately
use FreeBSD internally more than they use Linux.

This study of theirs at http://new.isc.org/proj/dnsperf/
is the proof of the pudding.  I also noticed that according to
the testbed description they are using HP Proliant DL140 G3 servers -
those servers use El-crappy Broadcom 5722 ethernet chips on
their motherboard, and the FreeBSD driver for these
chips is iffy - FreeBSD 6.1 in fact panicked when using
this chip family, as I documented in a PR for an HP Proliant
server.  And HP supports and supplies the RedHat Linux
driver for this chipset for this server, and there's no
question that Gentoo uses the same driver.  I can
hardly think of a testbed more tilted towards Linux
than these servers.

But that's OK, you continue rooting around in the FreeBSD 7
kernel all you want; don't bother actually looking at the
network hardware, we all know it doesn't matter.  NOT!

Consider also that ISC is a 501(c)(3).  The money they
are spending on employing FreeBSD developers and hosting
ftp.freebsd.org isn't theirs.  It's donated to them
specifically to be used for those purposes.

  e.g. NSD
  (ports/dns/nsd) is a much faster and more scalable DNS server than BIND
  (because it is better optimized for the smaller set of features it
  supports).
 
 
  When you make remarks like that it's no wonder ISC is in the business
  of slamming FreeBSD.  People used to make the same claims about djbdns
  but I noticed over the last few years they don't seem to be doing
  that anymore.

 What, you mean factual statements?  NSD *is* faster, it *is* more
 scalable, it *does* support fewer features than BIND, and it *is* more
 optimized for those features (e.g. it tries to precompute DNS responses,
 which it can do because it doesn't support dynamic updates, etc).  The
 ISC devels acknowledge this.  BIND has architectural constraints from
 being a more complete DNS server solution.


You could just as easily have said "a more feature-lacking,
stripped-down nameserver like nsd is faster."  That's factual
too.  You're even doing it now; "architectural constraints"?
Look at the language ISC uses to describe its own server:
http://www.isc.org/index.pl?/about/press/?pr=2007032700
I see "fastest version yet", not "slower than other nameservers".

Granted, neither group is making money on the nameserver
software, so it's not like money is at stake here.  But pride
is.

Ted



Re: FreeBSD bind performance in FreeBSD 7

2008-02-27 Thread Kris Kennaway

Ted Mittelstaedt wrote:



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Kris Kennaway
Sent: Monday, February 25, 2008 12:18 PM
To: Oliver Herold; freebsd-questions@freebsd.org;
[EMAIL PROTECTED]
Subject: Re: FreeBSD bind performance in FreeBSD 7


Oliver Herold wrote:

Hi,

I saw these bind benchmarks just a few minutes ago,

http://new.isc.org/proj/dnsperf/OStest.html

is this true for FreeBSD 7 (current state: RELENG_7/7.0R) too? Or is
this something verified only for the state of development back in August
2007?

I have been trying to replicate this.  ISC have kindly given me access
to their test data but I am seeing Linux performing much slower than
FreeBSD with the same ISC workload.



Kris,

  Every couple of years we go through this with ISC.  They come out with
a new version of BIND and then claim that nothing other than Linux can
run it well.  I've seen this nonsense before and it's tiresome.

Incidentally, the query tool they used, queryperf, has been replaced
by dnsperf.  Someone needs to look at that port - /usr/ports/dns/dnsperf -
as it has a build dependency on bind9.  BIND 9.3.4 is already part of
6.3-RELEASE, and I was rather irked when I ran make in the dnsperf port
and it stupidly began downloading and building the same version of BIND
that I was already running on my server.

* I am trying to understand what is different about the ISC
configuration but have not yet found the cause.


It's called Anti-FreeBSD bias.  You won't find anything.


This is false, but I didn't expect any better from you.

ISC relies heavily on FreeBSD internally, and contributes *lots* of
resources to the FreeBSD project, including hosting one half of
ftp.freebsd.org and employing several FreeBSD developers.



e.g. NSD
(ports/dns/nsd) is a much faster and more scalable DNS server than BIND
(because it is better optimized for the smaller set of features it
supports).



When you make remarks like that it's no wonder ISC is in the business
of slamming FreeBSD.  People used to make the same claims about djbdns
but I noticed over the last few years they don't seem to be doing
that anymore.


What, you mean factual statements?  NSD *is* faster, it *is* more 
scalable, it *does* support fewer features than BIND, and it *is* more 
optimized for those features (e.g. it tries to precompute DNS responses, 
which it can do because it doesn't support dynamic updates, etc).  The 
ISC devels acknowledge this.  BIND has architectural constraints from 
being a more complete DNS server solution.



If nsd is so much better, then yank bind out of the base FreeBSD and
replace it with nsd.  Of course that will make more work for me
when I regen our nameservers here since nsd will be the first thing
on the rm list.


You're funny, Ted.  Somehow you got out of my killfile, though; guess
I'll fix that.


Kris


Re: FreeBSD bind performance in FreeBSD 7

2008-02-27 Thread Sam Leffler

Ted Mittelstaedt wrote:
  

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Kris Kennaway
Sent: Monday, February 25, 2008 12:18 PM
To: Oliver Herold; freebsd-questions@freebsd.org;
[EMAIL PROTECTED]
Subject: Re: FreeBSD bind performance in FreeBSD 7


Oliver Herold wrote:


Hi,

I saw these bind benchmarks just a few minutes ago,

http://new.isc.org/proj/dnsperf/OStest.html

is this true for FreeBSD 7 (current state: RELENG_7/7.0R) too? Or is
this something verified only for the state of development back in August
2007?
  

I have been trying to replicate this.  ISC have kindly given me access
to their test data but I am seeing Linux performing much slower than
FreeBSD with the same ISC workload.




Kris,

  Every couple of years we go through this with ISC.  They come out with
a new version of BIND and then claim that nothing other than Linux can
run it well.  I've seen this nonsense before and it's tiresome.

Incidentally, the query tool they used, queryperf, has been replaced
by dnsperf.  Someone needs to look at that port - /usr/ports/dns/dnsperf -
as it has a build dependency on bind9.  BIND 9.3.4 is already part of
6.3-RELEASE, and I was rather irked when I ran make in the dnsperf port
and it stupidly began downloading and building the same version of BIND
that I was already running on my server.

  

* I am trying to understand what is different about the ISC
configuration but have not yet found the cause.



It's called Anti-FreeBSD bias.  You won't find anything.

  

e.g. NSD
(ports/dns/nsd) is a much faster and more scalable DNS server than BIND
(because it is better optimized for the smaller set of features it
supports).




When you make remarks like that it's no wonder ISC is in the business
of slamming FreeBSD.  People used to make the same claims about djbdns
but I noticed over the last few years they don't seem to be doing
that anymore.

If nsd is so much better, then yank bind out of the base FreeBSD and
replace it with nsd.  Of course that will make more work for me
when I regen our nameservers here since nsd will be the first thing
on the rm list.
  


Please save your rhetoric for some other forum.  The ISC folks have been 
working with us to understand what's going on.  I'm not aware of any 
anti-FreeBSD slams going on; mostly uninformed comments.


We believe FreeBSD does very well in any comparisons of the sort being 
discussed and there's still lots of room for improvement.


As to nsd vs bind, understand they are very different applications w/ 
totally different goals.  Comparing performance is not entirely fair and 
certainly is difficult.  Kris investigated the performance of nsd mostly 
to understand how bind might scale if certain architectural changes were 
made to eliminate known bottlenecks in the application.


   Sam


Re: FreeBSD bind performance in FreeBSD 7

2008-02-27 Thread Adrian Chadd
(Sorry for top posting.)

It's not actually -that- bad an idea to compare different applications.
It sets the bar for how far the entire system {hardware, OS,
application, network} can be pushed.

If nsd beats bind9 by, say, 5 or 10% overall, then it's nothing to write
home about.  If nsd beats bind9 by 50% and shows similar
kernel/interrupt-space time use, then that's something to stare at.  Even
if it's just because nsd 'does less' and gives more CPU time to
system/interrupt processing, you've identified that the system -can- be
pushed harder, and perhaps working with the bind9 guys a little more
can identify what they're doing wrong.

That's how I noticed the performance differences between various
platforms running Squid a few years ago - for example, gettimeofday()
being called way, way too frequently - and I compared Squid's
kernel/interrupt time, syscall footprint, hwpmc/oprofile traces, etc.
against other proxy-capable applications (varnish, lighttpd, apache)
to see exactly what they were doing differently.

2c,



adrian


On 28/02/2008, Sam Leffler [EMAIL PROTECTED] wrote:
 Ted Mittelstaedt wrote:
  
   -Original Message-
   From: [EMAIL PROTECTED]
   [mailto:[EMAIL PROTECTED] Behalf Of Kris Kennaway
   Sent: Monday, February 25, 2008 12:18 PM
   To: Oliver Herold; freebsd-questions@freebsd.org;
   [EMAIL PROTECTED]
   Subject: Re: FreeBSD bind performance in FreeBSD 7
  
  
   Oliver Herold wrote:
  
   Hi,
  
   I saw these bind benchmarks just a few minutes ago,
  
   http://new.isc.org/proj/dnsperf/OStest.html
  
   is this true for FreeBSD 7 (current state: RELENG_7/7.0R) too? Or is
   this something verified only for the state of development back in August
   2007?
  
   I have been trying to replicate this.  ISC have kindly given me access
   to their test data but I am seeing Linux performing much slower than
   FreeBSD with the same ISC workload.
  
  
  
   Kris,
  
  Every couple of years we go through this with ISC.  They come out with
   a new version of BIND and then claim that nothing other than Linux can
   run it well.  I've seen this nonsense before and it's tiresome.
  
   Incidentally, the query tool they used, queryperf, has been replaced
   by dnsperf.  Someone needs to look at that port - /usr/ports/dns/dnsperf -
   as it has a build dependency on bind9.  BIND 9.3.4 is already part of
   6.3-RELEASE, and I was rather irked when I ran make in the dnsperf port
   and it stupidly began downloading and building the same version of BIND
   that I was already running on my server.
  
  
   * I am trying to understand what is different about the ISC
   configuration but have not yet found the cause.
  
  
   It's called Anti-FreeBSD bias.  You won't find anything.
  
  
   e.g. NSD
   (ports/dns/nsd) is a much faster and more scalable DNS server than BIND
   (because it is better optimized for the smaller set of features it
   supports).
  
  
  
   When you make remarks like that it's no wonder ISC is in the business
   of slamming FreeBSD.  People used to make the same claims about djbdns
   but I noticed over the last few years they don't seem to be doing
   that anymore.
  
   If nsd is so much better, then yank bind out of the base FreeBSD and
   replace it with nsd.  Of course that will make more work for me
   when I regen our nameservers here since nsd will be the first thing
   on the rm list.
  


 Please save your rhetoric for some other forum.  The ISC folks have been
  working with us to understand what's going on.  I'm not aware of any
  anti-FreeBSD slams going on; mostly uninformed comments.

  We believe FreeBSD does very well in any comparisons of the sort being
  discussed and there's still lots of room for improvement.

  As to nsd vs bind, understand they are very different applications w/
  totally different goals.  Comparing performance is not entirely fair and
  certainly is difficult.  Kris investigated the performance of nsd mostly
  to understand how bind might scale if certain architectural changes were
  made to eliminate known bottlenecks in the application.


 Sam

 ___
  [EMAIL PROTECTED] mailing list
  http://lists.freebsd.org/mailman/listinfo/freebsd-performance
  To unsubscribe, send any mail to [EMAIL PROTECTED]



-- 
Adrian Chadd - [EMAIL PROTECTED]


Re: FreeBSD bind performance in FreeBSD 7

2008-02-26 Thread O. Hartmann

Kris Kennaway wrote:
[SCHNIPP]


* 7.0 with ULE has a bug on this workload (actually to do with workloads 
involving high interrupt rates).  It is fixed in 8.0.


Will this fix also be available for 7.0?


Regards,
Oliver


Re: FreeBSD bind performance in FreeBSD 7

2008-02-26 Thread Kris Kennaway

O. Hartmann wrote:

Kris Kennaway wrote:
[SCHNIPP]


* 7.0 with ULE has a bug on this workload (actually to do with 
workloads involving high interrupt rates).  It is fixed in 8.0.


Will this fix also be available for 7.0?


If you mean "will it be merged to RELENG_7", absolutely.  If you mean
"will an erratum be released and merged to RELENG_7_0", that is up to the
release engineers.


Kris


RE: FreeBSD bind performance in FreeBSD 7

2008-02-26 Thread Ted Mittelstaedt


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Kris Kennaway
 Sent: Monday, February 25, 2008 12:18 PM
 To: Oliver Herold; freebsd-questions@freebsd.org;
 [EMAIL PROTECTED]
 Subject: Re: FreeBSD bind performance in FreeBSD 7


 Oliver Herold wrote:
  Hi,
 
  I saw these bind benchmarks just a few minutes ago,
 
  http://new.isc.org/proj/dnsperf/OStest.html
 
  is this true for FreeBSD 7 (current state: RELENG_7/7.0R) too? Or is
  this something verified only for the state of development back in August
  2007?

 I have been trying to replicate this.  ISC have kindly given me access
 to their test data but I am seeing Linux performing much slower than
 FreeBSD with the same ISC workload.


Kris,

  Every couple of years we go through this with ISC.  They come out with
a new version of BIND and then claim that nothing other than Linux can
run it well.  I've seen this nonsense before and it's tiresome.

Incidentally, the query tool they used, queryperf, has been replaced
by dnsperf.  Someone needs to look at that port - /usr/ports/dns/dnsperf -
as it has a build dependency on bind9.  BIND 9.3.4 is already part of
6.3-RELEASE, and I was rather irked when I ran make in the dnsperf port
and it stupidly began downloading and building the same version of BIND
that I was already running on my server.


 * I am trying to understand what is different about the ISC
 configuration but have not yet found the cause.

It's called Anti-FreeBSD bias.  You won't find anything.

 e.g. NSD
 (ports/dns/nsd) is a much faster and more scalable DNS server than BIND
 (because it is better optimized for the smaller set of features it
 supports).


When you make remarks like that it's no wonder ISC is in the business
of slamming FreeBSD.  People used to make the same claims about djbdns
but I noticed over the last few years they don't seem to be doing
that anymore.

If nsd is so much better, then yank bind out of the base FreeBSD and
replace it with nsd.  Of course that will make more work for me
when I regen our nameservers here since nsd will be the first thing
on the rm list.

Ted



Re: FreeBSD bind performance in FreeBSD 7

2008-02-26 Thread Predrag Punosevac

Ted Mittelstaedt wrote:
  

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Kris Kennaway
Sent: Monday, February 25, 2008 12:18 PM
To: Oliver Herold; freebsd-questions@freebsd.org;
[EMAIL PROTECTED]
Subject: Re: FreeBSD bind performance in FreeBSD 7


Oliver Herold wrote:


Hi,

I saw these bind benchmarks just a few minutes ago,

http://new.isc.org/proj/dnsperf/OStest.html

is this true for FreeBSD 7 (current state: RELENG_7/7.0R) too? Or is
this something verified only for the state of development back in August
2007?
  

I have been trying to replicate this.  ISC have kindly given me access
to their test data but I am seeing Linux performing much slower than
FreeBSD with the same ISC workload.




Kris,

  Every couple of years we go through this with ISC.  They come out with
a new version of BIND and then claim that nothing other than Linux can
run it well.  I've seen this nonsense before and it's tiresome.

Incidentally, the query tool they used, queryperf, has been replaced
by dnsperf.  Someone needs to look at that port - /usr/ports/dns/dnsperf -
as it has a build dependency on bind9.  BIND 9.3.4 is already part of
6.3-RELEASE, and I was rather irked when I ran make in the dnsperf port
and it stupidly began downloading and building the same version of BIND
that I was already running on my server.

  

* I am trying to understand what is different about the ISC
configuration but have not yet found the cause.



It's called Anti-FreeBSD bias.  You won't find anything.

  
You just described the tests "up to isomorphism", to borrow the
terminology of mathematics, which is a subject more familiar to me :-)

The OpenBSD results have been discussed and analyzed on
misc@openbsd.org.  Even to a hobbyist like myself it was not clear why
they chose to test OpenBSD 4.1 when in only two months the stable
version of OpenBSD will be 4.3.  For those unfamiliar, the performance
of OpenBSD 4.2 as a DNS server was dramatically improved over the 4.1
release.  The question of multi-threading (a no-no in the OpenBSD
world) and its role in the above results was also analyzed.




e.g. NSD
(ports/dns/nsd) is a much faster and more scalable DNS server than BIND
(because it is better optimized for the smaller set of features it
supports).




When you make remarks like that it's no wonder ISC is in the business
of slamming FreeBSD.  People used to make the same claims about djbdns
but I noticed over the last few years they don't seem to be doing
that anymore.

If nsd is so much better, then yank bind out of the base FreeBSD and
replace it with nsd.  Of course that will make more work for me
when I regen our nameservers here since nsd will be the first thing
on the rm list.

  
I sincerely hope for the above.  Hopefully Ted can finally buy his
wife that Mercedes she deserves so much ;-) .


Cheers,
Predrag



Ted

  




Re: FreeBSD bind performance in FreeBSD 7

2008-02-25 Thread Kris Kennaway

Oliver Herold wrote:

Hi,

I saw these bind benchmarks just a few minutes ago,

http://new.isc.org/proj/dnsperf/OStest.html

is this true for FreeBSD 7 (current state: RELENG_7/7.0R) too? Or is
this something verified only for the state of development back in August
2007?


I have been trying to replicate this.  ISC have kindly given me access 
to their test data but I am seeing Linux performing much slower than 
FreeBSD with the same ISC workload.


  http://people.freebsd.org/~kris/scaling/bind-pt.png

Summary:

* FreeBSD 7.0-R with 4BSD scheduler has close to ideal scaling on this test.

* The drop above 6 threads is due to limitations within BIND.

* Linux 2.6.24 has about 35% lower performance than FreeBSD, which is 
significantly at variance with the ISC results.  It also doesn't scale 
above 3 CPUs.


* I am trying to understand what is different about the ISC 
configuration but have not yet found the cause.  They were testing 
2.6.20.7 so it is possible that there was a major regression before the 
2.6.22 and .24 kernels I tested.  Or maybe something is broken with the 
Intel gige driver in Linux (they were using Broadcom hardware).  The 
graph is showing performance over 10ge, but I get the same peak 
performance over gige when I query from 2 clients (the client benchmark 
is very sensitive to network latency so a single client is not enough to 
saturate BIND over gige).


* 7.0 with ULE has a bug on this workload (actually to do with workloads 
involving high interrupt rates).  It is fixed in 8.0.


* Changes we have in progress to improve UDP performance do not help 
much with this particular workload (only about 5%), but with more 
scalable applications we see 30-40% improvement.  e.g. NSD 
(ports/dns/nsd) is a much faster and more scalable DNS server than BIND 
(because it is better optimized for the smaller set of features it 
supports).
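For context on what these benchmarks actually measure: tools like queryperf and dnsperf blast minimal UDP DNS queries at the server as fast as possible. A sketch of one such query packet, built with only the Python standard library (the domain name and flag values here are illustrative, not taken from the ISC test data):

```python
import random
import struct

def build_dns_query(name: str, qtype: int = 1, qclass: int = 1) -> bytes:
    """Build a minimal DNS query in RFC 1035 wire format (QTYPE 1 = A)."""
    txid = random.randint(0, 0xFFFF)
    # Header: id, flags (0x0100 = recursion desired), QDCOUNT=1, AN/NS/AR=0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

pkt = build_dns_query("example.com")
# A benchmark client would loop, sending pkt over a UDP socket to port 53
# and matching responses by transaction id, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(pkt, ("192.0.2.1", 53))
```

Since each request and response fits in one UDP datagram, the benchmark is dominated by per-packet kernel and scheduler costs, which is why it is so sensitive to the OS and NIC driver differences discussed above.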


Kris




Re: FreeBSD bind performance in FreeBSD 7

2008-02-25 Thread Chris

 * 7.0 with ULE has a bug on this workload (actually to do with workloads
 involving high interrupt rates).  It is fixed in 8.0.

Kris, can you say anything more about the interrupt workload bug in ULE?
On all my 7.0 servers I am now using ULE, even on the UP ones, since it
was said there are slight improvements for UP as well, but all the
machines can get interrupt-intensive, with lots of high-speed transfers
generating NIC interrupts.  In this scenario am I better off using 4BSD?
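For reference, the scheduler is chosen at kernel build time, so comparing the two means rebuilding. A sketch of the relevant kernel config lines (FreeBSD 7.x conventions; `MYKERNEL` is a placeholder config name, and you should check your own GENERIC for the shipped default):

```
# Kernel configuration excerpt -- enable exactly one scheduler:
options         SCHED_4BSD      # traditional scheduler
#options        SCHED_ULE       # the scheduler discussed in this thread

# Then, from /usr/src:
#   make buildkernel KERNCONF=MYKERNEL
#   make installkernel KERNCONF=MYKERNEL
```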

Chris


Re: FreeBSD bind performance in FreeBSD 7

2008-02-25 Thread Kris Kennaway

Chris wrote:

* 7.0 with ULE has a bug on this workload (actually to do with workloads
involving high interrupt rates).  It is fixed in 8.0.


Kris, can you say anything more about the interrupt workload bug in ULE?
On all my 7.0 servers I am now using ULE, even on the UP ones, since it
was said there are slight improvements for UP as well, but all the
machines can get interrupt-intensive, with lots of high-speed transfers
generating NIC interrupts.  In this scenario am I better off using 4BSD?


I can't say for sure; you would have to measure your
throughput.  It probably won't matter on UP though.


Kris