Oh yeah, sorry, forgot to mention.

The NIC is the same one: Dual Intel PCI-X.


Lenny.


Curtis LaMasters wrote:

I have forgotten and am too lazy to go through all my emails again to
check, but have you tried standard Intel server NICs for this?

Curtis LaMasters
http://www.curtis-lamasters.com
http://www.builtnetworks.com



On Wed, Jul 29, 2009 at 10:30 AM, Lenny <[email protected]> wrote:
Hi guys,

I know how sick of me you are by now, but I've had some developments here
and now I'm stuck again.

So, FINALLY I convinced management to buy a new server. We bought an IBM
x3550 with 2 quad-core E5420 CPUs and 2GB of PC2-5300 667MHz RAM. Not just
one, we bought 2 of them (we need the second one for an identical
project).

I really wanted to try the Yandex em driver, so I installed the "1.2.3-RC2
built on Wed Jun 24 10:37:51 EDT 2009" version.
These are the things I've changed:
/etc/sysctl.conf
added:
dev.em.0.rx_processing_limit=1000
dev.em.1.rx_processing_limit=1000

kern.ipc.somaxconn=1024

dev.em.0.rx_kthreads=2
dev.em.1.rx_kthreads=2

/boot/loader.conf
added:
hw.em.rxd="4096"
hw.em.txd="4096"
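
After rebooting, it may be worth confirming those values actually took
effect. A minimal sketch (names copied from the settings above; em0/em1
interface indices are assumed):

```
# Loader tunables (ring sizes) are fixed at boot; check what the
# loader actually picked up:
kenv hw.em.rxd
kenv hw.em.txd

# Runtime knobs from /etc/sysctl.conf:
sysctl dev.em.0.rx_processing_limit dev.em.1.rx_processing_limit
sysctl dev.em.0.rx_kthreads dev.em.1.rx_kthreads
```

If the dev.em.X.rx_kthreads nodes don't exist, the Yandex driver isn't
the one that loaded.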

I wasn't sure about the kthreads, so I set the value to "2", but I later read
that people use 6.
Now, for the problem:
It's exactly the same! I still get 50kpps, which is about 180Mbps, with 2
CPUs pegged at 100%.
How is that even possible?
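
A quick sanity check on those numbers (a sketch; it assumes the 50kpps and
180Mbps figures are averages over the same interval):

```python
# Back-of-the-envelope check of the reported traffic figures.
pps = 50_000                 # packets per second
mbps = 180                   # megabits per second

bits_per_packet = mbps * 1_000_000 / pps
bytes_per_packet = bits_per_packet / 8
print(bytes_per_packet)      # prints 450.0
```

Roughly 450-byte average packets, so the box is hitting a packet-rate
ceiling rather than a bandwidth one, which is consistent with the emX
taskq threads saturating two cores.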

Should I give up on RC version and go back to stable, which also means
giving up on the em driver?
Any other things I can adjust?

By the way, I checked sysctl net.inet.ip.intr_queue_drops and it's "0".
On the interfaces I see that em0 (outside) has 0 errors, but em1 (inside)
shows 3666587/0.
6 of the CPUs (cores) are usually 100% idle, while the other 2 are stuck
servicing the emX taskq.
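
For pinning down where those two cores' cycles go, the usual FreeBSD tools
may help (a command sketch; the output is system-specific):

```
# Per-thread CPU usage; the em taskq kthreads should show up near the top:
top -SH

# Interrupt counts and rates per device:
vmstat -i

# Rolling per-second packet and error counters on the inside interface:
netstat -w 1 -I em1
```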
In other words, nothing's changed. Except for one thing: management has
spent the money and now they want to see results.
Please help!

Lenny.


changed:
net.inet.ip.intr_queue_maxlen=4096
(old value: 1000)
On Thu, May 14, 2009 at 11:26 AM, Lenny <[email protected]> wrote:
Thanks for all the suggestions, guys.
Anyway, I found it very interesting that the new snapshots have yandex
driver in them, so I decided to try it.
Of course, as I don't have the new server yet, I had to try on my old IBM
x335.

But a couple of things wouldn't let me try it:
with 2 SCSI hard drives in RAID 1 (LSI controller), it would always give
an error and reboot within 15 seconds. If I take out 1 drive, it boots, but
then gets stuck on "configuring wan interface" for a long time and then
reboots. The interface isn't even connected, and previous pfSense versions
had no problems with it.

Any suggestions?

thanks,
Lenny.

P.S. We don't have eval systems, so I'll have to take the risk and buy a
pig in a poke.

On Wed, May 13, 2009 at 6:13 PM, Scott Ullrich <[email protected]> wrote:
On Wed, May 13, 2009 at 10:21 AM, Rainer Duffner <[email protected]>
wrote:
AFAIK, Sun still provides eval systems for free.
I would evaluate one of the new X2270s with the Nehalem Xeons.
They should provide a 50% boost over even the 5400-series Xeons.
Also, they use Intel NICs, IIRC.

The smallest test system already has 6 GB of RAM and costs 2000 USD,
which you only have to pay after 60 days.
All good advice here in the last couple of messages. I wanted to add that
I would suggest trying a recent 1.2 / FreeBSD 7 pfSense snapshot from
snapshots.pfsense.org, as we added the high-performance yandex driver
located at
http://people.yandex-team.ru/~wawa/em-6.9.6-RELENG7-yandex-1.36.2.10.tar.gz

The README file that was included:

Main features
-------------

The RX queue is processed with more than one thread. Use "sysctl
dev.em.X.rx_kthreads" to alter the number of threads.

TX interrupts have been removed because they are not actually necessary.
As a result, the interrupt rate has been at least halved.

TX queue cleaning has been moved to a separate kthread. em_start uses
mtx_trylock instead of mtx_lock, so em_start blocks less.

+ RX queues' priority may be altered through sysctl. The system seems to be
more stable if RX is scheduled with less priority.

+ The RX interrupt stays masked if there is no thread ready to catch the
interrupt. This hint reduces context switching under load.

You will want to experiment with 1 thread per proc and 2 threads per
proc by setting "sysctl dev.em.X.rx_kthreads"  (I think)
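
That experiment could be scripted along these lines (a rough sketch,
untested; the em0/em1 names and the measurement step are my assumptions,
only the sysctl name comes from the README above):

```
# Run as root. Try each RX kthread count in turn, then sample the
# packet rate before moving on (e.g. with: netstat -w 1 -I em0).
for n in 1 2; do
    sysctl dev.em.0.rx_kthreads=$n
    sysctl dev.em.1.rx_kthreads=$n
    sleep 60    # let traffic stabilize while you watch the pps counters
done
```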

Scott

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Commercial support available - https://portal.pfsense.org

