I'm looking at producing a netgraph node that is going to be
potentially very hard on kernel memory.  The node may have to manage
as many as 10K netgraph hook connections (each one requires a small
amount of memory) and access to the hooks requires that they be in a
table (not a linked list).

This means I'm forced either to make a rather large static
allocation or to reallocate as required.  I'd like to poll the
collective wisdom on various strategies.

In this case, the actual usage is hard to predetermine and can range
from small (a few entries) to very large (a few thousand entries).

Is it bad, in the kernel context, to allocate a moderate amount and
then increase one's allocation a la realloc(3)?

Dave.

-- 
============================================================================
|David Gilbert, Velocet Communications.       | Two things can only be     |
|Mail:       [EMAIL PROTECTED]             |  equal if and only if they |
|http://www.velocet.net/~dgilbert             |   are precisely opposite.  |
=========================================================GLO================


