Hi Gert,

Your response is appreciated. One fatal assumption, though: where did the idea come from that I am only forcing one end of the link? Read back over my post, keeping in mind that I force both ends to 100/full. By "renegotiation" I meant each end changing state - yes, I chose a bad word.
Do you have a view on what causes an end (or both ends) of a link to suddenly change state and decide the other end is not capable of 100/full? That is what I am trying to understand. I am also referring to a link that is not erroring like mad - the occurrences might be as low as once per week.

Cheers
Heath

On 20 August 2010 09:48, Gert Doering <g...@greenie.muc.de> wrote:
> Hi,
>
> On Fri, Aug 20, 2010 at 07:33:14AM +0100, Heath Jones wrote:
> > I'm really curious as to why there are many people here saying forcing
> > ports is a bad thing though. I was pretty surprised to be reading that,
> > actually; it's good to have another perspective on the idea.
>
> If you force one end, and the other end is set to auto, what usually
> happens is that the "auto" end will not see autoneg happening - and has
> to assume "I'm connected to a *Hub*, so I must do half-duplex now".
>
> So now you have one side full-duplex and the other side half-duplex,
> which implies "collision detection and avoidance".
>
> Now, the half-duplex side sends out a packet. Everything fine. The
> full-duplex side also wants to send out a packet, some microseconds
> later - and it will just go ahead and do it, since it does not have to
> wait for the incoming packet to be finished (full-duplex!).
>
> Now, on the half-duplex end, the device notices "oh, something is coming
> in while I am sending a packet, so this MUST BE A COLLISION" - and
> drops(!) both the outgoing and the incoming packet.
>
> Boom, you have packet loss. And the worst thing about this is that the
> packet loss is fairly hard to diagnose - if the link is not carrying
> background traffic, diagnosing with a single "ping" stream will always
> show "packet goes out, reply comes back; new packet goes out, reply
> comes back", but there will never be two packets on the link at the
> same time -> no collision.
>
> So you assume "everything is fine", put the link into production use,
> and your customers complain that "the internet is slow".
> > I've seen countless issues where inter-switch links, inter-router
> > links and also links between servers and switches have caused so many
> > issues. On almost all of these occasions, forcing will solve the
> > problem.
>
> Well, of course someone will still stick to the old lore... :-)
>
> > The link is actually going down while the renegotiation happens. This
> > causes a L2 topology change, so frames will be dropped. In a service
> > provider environment, there will be a L3 topology change - the IGP
> > does its thing, and this may take some time (especially on a heavily
> > loaded router). The end result is customers start calling, wondering
> > where their traffic went.
>
> Huh? There is no "renegotiation" going on in nway autoneg.
>
> > It sounds like this is a matter of opinion, and the opinion depends on
> > the environment in which it is being applied, no?
>
> Well. "Practical experience" tends to form opinion, yes.
>
> > I'll be honest here, I've never truly understood the cause of
> > speed/duplex mismatches.
>
> See above. People blindly forcing one side and not ensuring that the
> other side is also forced (possibly because that side is configured by
> a different group, or whatever). Or one side of the link getting
> exchanged years later, and the new device defaulting to autoneg, etc.
>
> > Noise would be the obvious one, but does noise actually play a big
> > part on relatively short cat5 links? Dodgy connectors? Problems with
> > the PLL decoder getting out of sync (noise again?)? Faulty clock?
> > Someone jumping on the cable?
>
> People making mistakes is the most common reason, by FAR.
>
> gert
> --
> USENET is *not* the non-clickable part of WWW!
> // www.muc.de/~gert/
> Gert Doering - Munich, Germany    g...@greenie.muc.de
> fax: +49-89-35655025              g...@net.informatik.tu-muenchen.de

_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
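As an aside, Gert's point about why a single ping stream hides a duplex mismatch can be illustrated with a toy model. This is a hypothetical sketch (not a real NIC or CSMA/CD simulator; the function name, timings and durations are all made up for illustration): the half-duplex side treats any overlap between its own transmission and an incoming frame as a collision and drops both frames, so serialized request/reply traffic shows zero loss while concurrent production traffic shows plenty.

```python
# Toy model of a duplex-mismatched link. Side A is half-duplex, side B
# is full-duplex. All names and numbers here are illustrative only.

def frames_lost_at_half_duplex_end(send_times_a, send_times_b, tx_duration=1.0):
    """Count frames lost at half-duplex side A.

    Side A sees any frame from B that overlaps one of its own
    transmissions as a collision, and drops both the outgoing and the
    incoming frame (as described in the thread above).
    """
    lost = 0
    for a_start in send_times_a:
        a_end = a_start + tx_duration
        for b_start in send_times_b:
            b_end = b_start + tx_duration
            # Does B's frame arrive while A is still transmitting?
            if a_start < b_end and b_start < a_end:
                lost += 2  # A drops its outgoing frame AND the incoming one
                break
    return lost

# Serialized ping: B only replies after A's request has finished,
# so the two directions never overlap -> no "collisions", no loss.
ping_requests = [t * 10.0 for t in range(10)]
ping_replies = [t + 2.0 for t in ping_requests]
print(frames_lost_at_half_duplex_end(ping_requests, ping_replies))  # 0

# Production traffic: both sides transmit independently, transmissions
# overlap, and the half-duplex side starts dropping frames.
prod_a = [t * 1.5 for t in range(10)]
prod_b = [t * 1.5 + 0.5 for t in range(10)]
print(frames_lost_at_half_duplex_end(prod_a, prod_b))  # > 0
```

The model is crude (fixed frame duration, no backoff or retransmission), but it captures why the link looks clean under a lone ping and falls apart once real bidirectional traffic arrives.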