On Tue, 22 Oct 2002, Bruce Evans wrote:
This return of 0 is apparently intentional. The comment before it says:
The 'However' clause there is not in RELENG_4. Returning 0 gets the
status updated; I think that is just too expensive to do every second.
Autonegotiation is only retried every
Hello,
Could someone please explain to me how to set up a channel bond between
two Ethernet cards?
http://people.freebsd.org/~paul/FEC/
From there you can fetch the needed source. You have to copy it into your
source tree, then compile and install it.
After you have the ng_fec module in /modules, see
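A minimal sketch of the rest of the setup, assuming two fxp cards and the
add_iface/set_mode_inet control messages described for ng_fec; the device
names and address are examples only:

    # load the module and create the fec0 node/interface
    kldload ng_fec
    ngctl mkpeer fec dummy fec

    # add the two physical ports to the bundle
    ngctl msg fec0: add_iface '"fxp0"'
    ngctl msg fec0: add_iface '"fxp1"'

    # choose a traffic distribution mode (round-robin on IP traffic)
    ngctl msg fec0: set_mode_inet

    # bring the aggregate interface up
    ifconfig fec0 inet 192.168.1.1 netmask 255.255.255.0 up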
Applying patches to mpd does not fix bugs in MS software ;-)
In MS software (Win XP Pro SP1, at least) there is the option
'negotiate multi-link for single link connections'.
Unfortunately it cannot handle multilink once it has been successfully negotiated.
One can verify this at (page to frame 82)
After taking a close look at tcp_input, I think I see a scenario where this
could happen. Say header prediction handles ~2 GB of data without
problems, and then a retransmission happens. snd_wnd starts collapsing, as it
should. The header prediction code is correctly skipped as the snd_wnd no
From: Kevin Stevens [mailto:Kevin_Stevens;pursued-with.net]
Any suggestions for how one would start debugging this to
find out where it's stuck, and how?
At a guess, you need to tune the state-table retention time down.
If by that you mean the MSL, I've already set the MSL to 5000 in this case.
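If that refers to the TCP MSL, on FreeBSD it is a sysctl whose value is in
milliseconds; a minimal sketch, assuming the net.inet.tcp.msl knob available
on 4.x:

    # default is 30000 ms; 5000 shortens TIME_WAIT retention to ~10 s (2*MSL)
    sysctl -w net.inet.tcp.msl=5000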
Until the multi-threaded kernel becomes stable...
(I'm using 4.6)
Is there a quick and dirty way of sharing mbufs with a user process?
The objective is to get the advantages of multi-processors yet avoid the
syscall overhead for each packet.
E.g., a netgraph node could queue mbufs in memory shared
On Wed, 23 Oct 2002, Charlie Root wrote:
The fxp is plugged into an SMC Tigerswitch. The SMC is configured to
pass VLANs 5, 10, and 20.
Can anyone suggest some tests I might try or further reading?
Try the SMC Tigerswitch printed manual. Some of the Tigerswitch series models
cannot handle
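As a quick test on the FreeBSD side, a vlan pseudo-interface can be layered
on top of the fxp; a sketch assuming a kernel built with vlan support, using
VLAN 5 from the original post (the address is an example only):

    # tag vlan0 with VLAN ID 5 and attach it to the fxp
    ifconfig vlan0 vlan 5 vlandev fxp0
    ifconfig vlan0 inet 10.0.5.1 netmask 255.255.255.0 up

    # verify that tagged frames are arriving on the parent interface
    tcpdump -n -i fxp0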
Hi,
I debugged this a bit further and figured out what the problem is.
The ISC dhcpd uses a bpf device to listen on an interface. When a broadcast
packet (e.g. a DHCP request) comes in on one interface, the bridging code
will correctly forward it out all the other interfaces in the cluster,
and also
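One way to observe the duplicates, as a sketch assuming a two-member bridge
of fxp0 and fxp1: run tcpdump on each member while sending a single DHCP
discover from a client:

    # a single inbound request should appear once per forwarded interface
    tcpdump -n -e -i fxp0 'udp port 67 or udp port 68' &
    tcpdump -n -e -i fxp1 'udp port 67 or udp port 68'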
From: Don Bowman
I have an application listening on an ipfw 'fwd' rule.
I'm sending ~3K new sessions per second to it. It
has to turn around and issue some of these out as
a proxy, and for some of them the destination
host won't exist.
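For context, a 'fwd' rule of the sort described might look like the
following; the rule number, interface, port, and proxy address are examples
only:

    # divert inbound HTTP to a local proxy listening on 127.0.0.1:3128
    ipfw add 1000 fwd 127.0.0.1,3128 tcp from any to any 80 in via fxp0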
For reference, the solution is