I have a question regarding the round-trip propagation delay on an Ethernet network.

Page 123 of the Cisco Press "Designing Cisco Networks" book states:

"The most significant design rule for Ethernet is that the round-trip propagation 
delay in one collision domain must not exceed 512 bit times, which is a requirement 
for collision detection to work correctly."

With 100 Mbps Ethernet, where one bit time is 10 ns, the maximum round-trip delay works out 
to 5.12 microseconds, which corresponds to a collision-domain diameter limit of about 205 meters.
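Just to sanity-check that arithmetic, here is the back-of-the-envelope calculation as I read it 
(nothing here beyond the 512 bit-time budget itself):

    # Back-of-the-envelope check of the 512 bit-time budget at 100 Mbps
    bit_rate = 100e6                       # 100 Mbps
    bit_time = 1.0 / bit_rate              # 10 ns per bit
    budget = 512 * bit_time                # round-trip budget in seconds
    print(budget * 1e6, "microseconds")    # -> 5.12 microseconds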

I currently oversee a large flat network several miles in diameter.  All of the links between 
buildings are single-mode fiber.  No routing is involved; everything is switched, so it is one 
large broadcast domain.

How does the 512 bit-time rule apply to fiber-optic cabling?  I see on page 127 of the same 
book that the round-trip delay in bit times per meter is 1.112 for Cat 5 cable, whereas for 
fiber-optic cable it is 1.0.
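If I am reading that table correctly, dividing the 512 bit-time budget by those per-meter figures 
gives a raw cable-length budget before repeater and NIC delays are subtracted (that is my own 
interpretation of the numbers, not something the book spells out):

    # Naive cable-length budget from the page-127 per-meter round-trip figures
    # (ignores repeater/DTE delays, which consume much of the real budget)
    budget_bit_times = 512
    cat5_bt_per_meter = 1.112
    fiber_bt_per_meter = 1.0
    print("Cat 5:", budget_bit_times / cat5_bt_per_meter, "m")   # ~460 m
    print("Fiber:", budget_bit_times / fiber_bt_per_meter, "m")  # 512 m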

I guess I'm having difficulty understanding how fiber can overcome the 512 bit-time rule and 
support much longer distances.

I do realize that this is not strictly a Cisco question, though it is covered in the DCN/CCDA 
material.  If someone could kindly refer me to any material that covers this topic, I'd 
appreciate it.
