I have noticed some strange behavior in TCP Westwood when running a wireless
simulation.
 
Here is the scenario:  I have a single Westwood FTP flow sending from node 0
to node 2, and node 2 is too far away to be reached directly (so traffic hops
through node 1).  In this case the RTT_MIN measured by Westwood is 20,000 and
STAYS at that value for the entire simulation (no new RTT_MIN is ever
calculated).  Note that INIT_RTT is also set to 20,000 (not sure if that is
right either; it might be a bug).
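
For reference, here is a minimal sketch of how a generic minimum-RTT filter
normally behaves (this is NOT the simulator's actual Westwood source; the
class name and units are made up).  The filter starts at the INIT_RTT
sentinel and should collapse to the first valid sample, so a value that
stays pinned at 20,000 suggests no usable sample ever reaches the update in
the multi-hop case:

    // Hypothetical min-RTT filter, for illustration only.
    #include <cstdio>
    #include <algorithm>

    class MinRttFilter {
    public:
        // Starts at a large sentinel, matching the INIT_RTT = 20,000 above.
        explicit MinRttFilter(double init_rtt = 20000.0) : rtt_min_(init_rtt) {}

        // Feed one RTT sample (same units as init_rtt).  If samples never
        // arrive, or are rejected before reaching this point, rtt_min_
        // stays pinned at the sentinel.
        void sample(double rtt) {
            if (rtt > 0.0)                          // ignore invalid samples
                rtt_min_ = std::min(rtt_min_, rtt);
        }

        double min() const { return rtt_min_; }

    private:
        double rtt_min_;
    };

    int main() {
        MinRttFilter f;
        std::printf("before any sample: %.0f\n", f.min());  // 20000
        f.sample(24.0);
        std::printf("after one sample:  %.0f\n", f.min());  // 24
        return 0;
    }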
 
 
If I set node 2 close enough to node 0, then Westwood updates its RTT_MIN to
something more appropriate right away (usually around 24).
 
Also, when I run TWO Westwood simulations, it works in both cases for some
reason.  So my question is: why, in the scenario where a single Westwood flow
has to hop to reach its destination, does RTT_MIN get set to such an insanely
high value and stay there?
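
For what it's worth, the check I have in mind is to log every change to the
minimum, along the lines of the sketch below.  The hook name is made up; the
real update would go wherever the Westwood agent processes incoming ACKs:

    // Hypothetical tracing hook, for illustration only: prints whenever a
    // new minimum RTT is recorded, so it is obvious whether any update
    // ever happens in the multi-hop case.
    #include <cstdio>

    void on_rtt_sample(double sample, double& rtt_min, double now) {
        if (sample > 0.0 && sample < rtt_min) {
            std::fprintf(stderr, "t=%.6f rtt_min %.1f -> %.1f\n",
                         now, rtt_min, sample);
            rtt_min = sample;
        }
    }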
 
-----------------------------------------------
Arya Afrashteh
Virginia Tech - CpE Grad Student
[EMAIL PROTECTED]
-----------------------------------------------
 
