I've set up the following topology: node SRC1 is connected to router R1, which
is connected to router R2, which in turn is connected to DST1. The link from
SRC1 to R1 is 10 Mb/s with a 1 ms delay; the link between R2 and DST1 has the
same parameters. The link between R1 and R2 is 1 Mb/s with a 10 ms delay. I
then start a TCP FTP transfer from SRC1 to DST1 with a packet size of 1500 B.
If I understand TCP correctly, each packet sent from SRC1 to DST1 gets larger
and larger (hence using more bandwidth) until it exceeds the network capacity.
At that point, the ACKs from DST1 to SRC1 will time out, and the sender should
start sending smaller packets again, hence using less bandwidth. So when I run
my simulation, I expect the channel utilization between R1 and R2 to grow
quickly until it reaches 1 Mb/s, then drop, and then ramp up again. However,
what I actually see is nearly constant utilization of 1 Mb/s the whole time.
Do I perhaps have a misconception?
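For reference, the sawtooth I was expecting can be sketched with a toy
Reno-style AIMD model in plain Python (not ns-2; the 24 ms RTT, the initial
slow-start threshold, and the loss rule are illustrative assumptions on my
part). One note baked into the sketch: in standard TCP it is the congestion
window, measured in packets, that grows and shrinks, while the segment size
stays fixed at 1500 B.

```python
# Toy Reno-style AIMD sketch (illustrative parameters, not taken from ns-2).
# Standard TCP grows its congestion WINDOW (in packets); packet size is fixed.

PKT_BITS = 1500 * 8            # 1500 B segments, as in the simulation
BOTTLENECK = 1_000_000         # the 1 Mb/s R1-R2 link
RTT = 0.024                    # assumed: 2 * (1 + 10 + 1) ms propagation delay
CAPACITY_PKTS = BOTTLENECK * RTT / PKT_BITS   # window that exactly fills the pipe

def cwnd_trace(rtts=40, ssthresh=16.0):
    """Congestion window, in packets, sampled once per RTT."""
    cwnd, trace = 1.0, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd > CAPACITY_PKTS:          # offered load exceeds the link: a drop
            ssthresh = cwnd / 2.0
            cwnd = max(cwnd / 2.0, 1.0)   # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2.0                   # slow start
        else:
            cwnd += 1.0                   # additive increase (congestion avoidance)
    return trace

trace = cwnd_trace()
print(trace[:8])
```

The window does oscillate in this toy, but the dips are brief; in the real
simulation the queue at R1 can keep transmitting during them, which would make
the measured utilization of the R1-R2 link look flat at 1 Mb/s.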

I'm using Agent/TCP and Application/FTP on SRC1 and Agent/TCPSink on DST1.
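For scale, here is a back-of-envelope check (plain Python; it considers only
the propagation delays I configured and ignores transmission and queueing
delays) of how many 1500 B packets it takes to keep the R1-R2 bottleneck busy:

```python
# Bandwidth-delay product of the SRC1 -> DST1 path at the bottleneck rate.
# Only the configured propagation delays are counted here.
bottleneck_bps = 1_000_000              # R1-R2 link: 1 Mb/s
rtt_s = 2 * (0.001 + 0.010 + 0.001)     # out and back over the three links: 24 ms
pkt_bits = 1500 * 8                     # 1500 B packets

bdp_bits = bottleneck_bps * rtt_s
pkts_to_fill_pipe = bdp_bits / pkt_bits
print(pkts_to_fill_pipe)                # -> 2.0
```

With only about two packets in flight needed to fill the pipe, even a very
small sending window saturates the link almost immediately, which may be part
of why the utilization I measure looks constant.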

Thanks in advance.

Eduardo J. Ortega - Linux user #222873 
"No fake - I'm a big fan of konqueror, and I use it for everything." -- Linus 
