On Sat, Mar 21, 2009 at 6:00 PM, Lenny <[email protected]> wrote:
> Hi Bill,
<snip>
> Now, for the bad part. I got to a total of almost 50kpps, and that was at
> 70% CPU, which probably means that at about 70kpps or so I'd hit 100%.
> That's actually a lot like what you said about Xeons (you said they maxed
> out around 80kpps).
>
> Then I looked at the rates you provided and I just want to understand
> something. The emX taskq is supposed to take one of the available CPUs and
> probably stick with it, right? Then if on one of the interfaces you have a

That sounds about right.

> very high load, then this process will take 100% of that CPU or core and
> it will hit the limit? Do I get this right? It also means that in your

Basically.

> situation, while you have only 14% load on the "general" CPU, the core
> that handles em1 might actually be somewhere around 55%, and the most it
> will take is about 70-80kpps. In that case, what is the solution? And if I'm

It's like a process: it will balance across CPUs, but it won't thread
across them - i.e. the taskq will only run on one CPU at any given
time.
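
If you want to watch this yourself, something along these lines should
show the taskq threads and which core each is on (a rough sketch,
assuming FreeBSD 7.x - the exact thread names can differ by driver and
version):

  # show kernel threads and per-thread CPU usage; look for the
  # "em0 taskq" / "em1 taskq" entries and their "C" (cpu) column
  top -SH

  # on 7.1+ you can also pin a thread to a core to experiment -
  # <tid> is the taskq's thread id from top, just a placeholder here
  cpuset -l 1 -t <tid>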

> wrong, how helpful will it be for me to replace the server with one like
> yours or similar? Will I benefit from more than 2 CPUs/cores? Just remember,
> all I need is a dual port NIC, which handles in and out - that's it.

I haven't benchmarked any Xeons in well over a year now, but when we
did, it was HP DL385 G2s vs HP DL380 G5s, and the Opterons (the 385
G2s) trounced the Intels.  The Intels maxed out at around 400kpps (the
point where we started seeing packet loss); the Opterons still hadn't
peaked when we ran out of test hardware at around 600kpps.  The newer
model Xeons should be faster.

The other design decision we made was to go dual-socket dual-core
instead of a single quad-core - given that we only had three interfaces
in use on most of our hardware, that gave us three cores handling the
NICs and one general-purpose core.  Any more would likely have been
overkill, at least until FreeBSD 8.0.  The primary argument for two
CPUs over a single one was memory bandwidth - a quad core would have
left all four cores fighting for bandwidth (note: I did no real
research here, it was a "gut feel" decision).

> And the last question. I saw that even though you have Intel NICs, you
> still have interrupt load on the CPU. My RRD graphs show "0" for
> interrupts. Is this normal? I don't have polling enabled.

This is probably due to the differences between FreeBSD 6.2 and 7.x
(in pfSense 1.2.x).
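
If you want to double-check from a shell, vmstat prints per-device
interrupt counts and rates (just a sketch - the device/irq names will
vary by box):

  # a busy em NIC should show a nonzero rate on its line,
  # unless the driver is moderating interrupts heavily
  vmstat -i

If that shows interrupts but the RRD graph still reads zero, the graph
is probably just polling a counter that moved between 6.2 and 7.x.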

--Bill
