Performance tuning hints of gigabit networking?

2003-02-26 Thread CHOI Junho
Hi, I am looking for a good resource on kernel tuning for very high-bandwidth HTTP servers (avg 500 Mbit/sec, peak 950 Mbit/sec). Today I faced a very unusual situation at 950 Mbit/sec! netstat -m:

16962/93488/262144 mbufs in use (current/peak/max): 16962 mbufs allocated to data

Re: Performance tuning hints of gigabit networking?

2003-02-26 Thread CHOI Junho
The average number of connections is very high. Almost all clients are DSL/cable users pulling files over HTTP (static files only). Here is thttpd output for one hour at peak:

thttpd[]: up 50401 seconds, stats for 3240 seconds
thttpd[]: thttpd - 147617 connections (45.5608/sec), 5900 max

Re: Performance tuning hints of gigabit networking?

2003-02-26 Thread CHOI Junho
2. I noticed the other network tuning parameters were not so important this time. Most of the problem comes from a low kern.ipc.nmbclusters -- I failed to set it over 65536.

3. thttpd usually uses mmap() to cache contents in memory. Our served files (only static files) vary from 10k ~

Re: Checksum offload support for Intel 82550/82551

2003-02-26 Thread Attila Nagy
Hello,

> I can't easily drop everything and slap together a test setup with exactly the right software and hardware I need to debug everyone's particular problem.

That's clear. (This bug only occurs in -CURRENT as of 30 seconds ago and on an UltraSPARC 10 with 16 if_dc interfaces, and I need

Re: Checksum offload support for Intel 82550/82551

2003-02-26 Thread Bill Paul
I think my problem is not that hard. This bug only occurs when you are running -CURRENT (or 5.0-RELEASE) from (I think) the point where BPF changed. Actually, there's something else that changed: the ng_fec module tries to set ifp->if_output to ng_fec_output() so that it can do some output