On Thu, Jul 12, 2001 at 09:27:54PM -0500, Mike Silbersack wrote:
> I'd like to do this also, provided that we also change the mbuf to cluster
> ratio from 4/1 to 2/1. This will ensure that the doubled per-socket
> memory usage doesn't cause systems to run out of clusters earlier than
> before.
This is sort of backwards. Today we have (kern/uipc_mbuf.c):
#ifndef NMBCLUSTERS
#define NMBCLUSTERS (512 + MAXUSERS * 16)
#endif
TUNABLE_INT_DECL("kern.ipc.nmbclusters", NMBCLUSTERS, nmbclusters);
TUNABLE_INT_DECL("kern.ipc.nmbufs", NMBCLUSTERS * 4, nmbufs);
What you actually want to do is double the number of clusters:
#define NMBCLUSTERS (512 + MAXUSERS * 32)
And then allocate half as many mbufs per cluster:
TUNABLE_INT_DECL("kern.ipc.nmbufs", NMBCLUSTERS * 2, nmbufs);
I think. Here's a sample from a system I run (netstat -m):
151/5024/18432 mbufs in use (current/peak/max):
128/4608/4608 mbuf clusters in use (current/peak/max)
As you can see, clusters peaked at their limit, while mbufs peaked at barely a quarter of theirs (5024 of 18432).
I want to see some data points from other types of servers before
saying this really is a good idea. That said, so far every system
I've checked runs out of clusters before mbufs.
Can some other people check systems in various forms of use?
--
Leo Bicknell - [EMAIL PROTECTED]
Systems Engineer - Internetworking Engineer - CCIE 3440
Read TMBG List - [EMAIL PROTECTED], www.tmbg.org