Hi all:

I have set up an experiment with TCP, but I am puzzled by its results,
so I thought maybe you could shed some light on it:

I have a linear network with 4 nodes:   1 ----- 2 ----- 3 ------ 4
Links 1-2 and 3-4 are full duplex, 100 Mb, 1 ms.
Link 2-3 is full duplex, 1 Mb, 1 ms.
I set up an FTP application sending from an Agent/TCP/Newreno in node
1 to an Agent/TCPSink in node 4. There's no other traffic.
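In case it helps, the setup is roughly the following (a sketch, not my exact script; the variable names are mine, and I leave the queue limit and all agent parameters at their defaults):

```tcl
set ns [new Simulator]

set n1 [$ns node]
set n2 [$ns node]
set n3 [$ns node]
set n4 [$ns node]

# Fast edge links, 1 Mb bottleneck in the middle, drop-tail queues.
$ns duplex-link $n1 $n2 100Mb 1ms DropTail
$ns duplex-link $n2 $n3 1Mb 1ms DropTail
$ns duplex-link $n3 $n4 100Mb 1ms DropTail

# NewReno sender at node 1, sink at node 4, FTP on top.
set tcp [new Agent/TCP/Newreno]
set sink [new Agent/TCPSink]
$ns attach-agent $n1 $tcp
$ns attach-agent $n4 $sink
$ns connect $tcp $sink

set ftp [new Application/FTP]
$ftp attach-agent $tcp
$ns at 0.0 "$ftp start"
```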

I would expect TCP to increase the transmission window, and therefore
the throughput, until the queue on the link from 2 to 3 became full.
Then TCP should lose a packet and reduce the window. If I plot the
cwnd_ of my TCP agent as a function of time, I see that it does indeed
keep increasing until the simulation ends. However, the throughput
reaches a value close to the 1 Mb link capacity almost from the
beginning, and the flow never loses a packet (the queue does not get
full). Is this caused by something specific to the TCP agent's flow
control, or am I missing something here?
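For what it's worth, here is the back-of-the-envelope arithmetic behind my expectation. The packet size (1000 bytes) and drop-tail queue limit (50 packets) are assumptions on my part, taken from what I believe are the ns-2 defaults, not values I set explicitly:

```python
# Rough estimate of the window (in packets) at which the bottleneck
# queue should overflow, for the 4-node topology described above.

PKT_BYTES = 1000               # assumed ns-2 default TCP packet size
BOTTLENECK_BPS = 1e6           # the 1 Mb link 2-3
PROP_RTT_S = 2 * 3 * 0.001     # three 1 ms hops in each direction

def window_to_fill_queue(queue_limit_pkts):
    # RTT seen by a packet: propagation delay plus one transmission
    # time on the bottleneck (transmission on the 100 Mb links is
    # negligible by comparison).
    tx_s = PKT_BYTES * 8 / BOTTLENECK_BPS
    rtt_s = PROP_RTT_S + tx_s
    bdp_pkts = BOTTLENECK_BPS * rtt_s / (PKT_BYTES * 8)
    # Loss should start once the window exceeds the bandwidth-delay
    # product plus the queue limit.
    return bdp_pkts + queue_limit_pkts

print(window_to_fill_queue(50))  # around 52 packets
```

So I would have expected the first drop once the window grows past roughly 52 packets, yet I never see one.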

Thanks,

-- 
Eduardo J. Ortega U.
