[ns] A STRANGE classifier problem: is it only designed to work for the routing agent?

2006-06-07 Thread Wei Gao

Hi,

I encountered a very strange classifier problem while trying to add a new
transport-layer agent to the mobile nodes.

In the Node::attach method in ns-node.tcl, the dmux_ member is declared to be
a Classifier/Port. However, when I trace through the program, I find that none
of the Classifier/Port methods are ever called; instead, all of the methods
that run belong to the base Classifier class.

As a result, when the classifier receives an incoming packet,
Classifier::recv() and Classifier::find() are called. The problem is that
Classifier::classify() uses offset_, and its return value is always -1. When
the result is -1, the packet is automatically handed to the routing agent, so
no matter what the packet is, it ends up at the routing agent. The classifier
is effectively never used!

I tried modifying Classifier::classify() so that it classifies packets by
their dport(). However, even though I install an NsObject* into a slot via
Classifier::install() (it is stored in the slot_ member), by the time recv()
is called the slot_ entries have all been cleared to 0, except for slot_[0],
which is reserved for the null agent. I cannot find where they are cleared,
which is very strange.
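
For reference, this is a minimal OTcl sketch of the kind of check involved
(a sketch only, assuming a standard wireless script where $node_(0) is the
mobile node; dmux_ and agent_port_ are the usual bound variable names, to be
verified against the tree):

  # Hedged sketch: attach a UDP agent to a mobile node and inspect the
  # port demultiplexer it lands in.  $ns and $node_(0) are assumed to
  # exist already (standard wireless script setup).
  set udp [new Agent/UDP]
  $ns attach-agent $node_(0) $udp         ;# Node::attach assigns a port
  set dmux [$node_(0) set dmux_]          ;# the node's port classifier
  puts "demux class: [$dmux info class]"  ;# expected: Classifier/Port
  puts "agent port:  [$udp set agent_port_]"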

This problem has puzzled me for days, and I am not sure whether it is a
genuine bug in the classifier class.

I would really appreciate any help.



Thanks a lot

Wei


[ns] Re: About the multihome in SCTP!

2006-06-07 Thread Marco Fiore

SCTP is not designed to work with wireless links. You should add wireless
support yourself, by extending the multihome-attach-agent and
multihome-add-destination methods in tcl/lib/ns-lib.tcl. If you look there,
you will see that only wired links are supported, and you would need to add
support for wireless links as well.
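
For reference, the existing wired usage looks roughly like the sketch below
(instproc and class names are taken from the standard SCTP multihoming
example scripts; treat them as assumptions and verify against your ns-2.29
tree):

  # Sketch of the *wired* SCTP multihoming setup that those instprocs
  # support today (assumed API, based on the bundled SCTP multihome
  # example scripts -- verify against your ns-2.29 tree).
  set host0_core [$ns node]
  set host0_if0  [$ns node]
  set host0_if1  [$ns node]
  $ns multihome-add-interface $host0_core $host0_if0
  $ns multihome-add-interface $host0_core $host0_if1

  set sctp0 [new Agent/SCTP]
  $ns multihome-attach-agent $host0_core $sctp0

  # Wireless support would mean teaching multihome-attach-agent (and
  # multihome-add-destination) to handle interfaces created as mobile
  # nodes instead of wired ones.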

Regards,

Marco Fiore

 
Original message
From: [EMAIL PROTECTED]
Date: 7 June 2006, 11:07 AM
To: ns-users@ISI.EDU
Subject: [ns] About the multihome in SCTP!


Hi,

I want to know how to configure an SCTP node in NS-2.29; the address type is
hierarchical. I am simulating the code in a WLAN, and there are some other
nodes: CN, Router, BS. The MN node in my simulation is made up of three nodes,
all configured with the same attributes. Please give me some advice.



[ns] two equal UDP CBR flows in droptail queue get different bandwidths?

2006-06-07 Thread Eduardo J. Ortega

Hi there:

I've set up the following experiment. I have two source nodes, S1 and S2,
directly connected to a node R1, and two destination nodes, D1 and D2,
directly connected to a node R2. Nodes R1 and R2 are connected to each other.
All links are 1 Mb/s full duplex with DropTail queues. Now, here's the thing:
I set up two flows, one going from S1 to D1 and the other from S2 to D2. Both
flows are UDP CBR at 1 Mb/s. Flow 1 starts at t=0 and finishes at t=20; flow 2
starts at t=10 and stops at t=15. The simulation runs from t=0 to t=25.
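
In script form, the setup is roughly the following sketch (illustrative names
and link delays, not the exact script):

  # Minimal sketch of the topology described above (names and the 10ms
  # delay are illustrative).
  set ns [new Simulator]
  set s1 [$ns node]; set s2 [$ns node]
  set r1 [$ns node]; set r2 [$ns node]
  set d1 [$ns node]; set d2 [$ns node]

  $ns duplex-link $s1 $r1 1Mb 10ms DropTail
  $ns duplex-link $s2 $r1 1Mb 10ms DropTail
  $ns duplex-link $r1 $r2 1Mb 10ms DropTail
  $ns duplex-link $r2 $d1 1Mb 10ms DropTail
  $ns duplex-link $r2 $d2 1Mb 10ms DropTail

  # Flow 1: S1 -> D1, 1 Mb/s CBR over UDP, t=0..20
  set udp1 [new Agent/UDP];   $ns attach-agent $s1 $udp1
  set null1 [new Agent/Null]; $ns attach-agent $d1 $null1
  $ns connect $udp1 $null1
  set cbr1 [new Application/Traffic/CBR]
  $cbr1 set rate_ 1Mb
  $cbr1 attach-agent $udp1
  $ns at 0.0  "$cbr1 start"
  $ns at 20.0 "$cbr1 stop"

  # Flow 2: S2 -> D2, same parameters, t=10..15
  set udp2 [new Agent/UDP];   $ns attach-agent $s2 $udp2
  set null2 [new Agent/Null]; $ns attach-agent $d2 $null2
  $ns connect $udp2 $null2
  set cbr2 [new Application/Traffic/CBR]
  $cbr2 set rate_ 1Mb
  $cbr2 attach-agent $udp2
  $ns at 10.0 "$cbr2 start"
  $ns at 15.0 "$cbr2 stop"

  $ns at 25.0 "exit 0"
  $ns run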

I'd expect that at t=10 (when flow 2 starts), both flows would experience the
same amount of packet loss, so that each one would get about 0.5 Mb/s of the
link between R1 and R2. But what really happens is that from t=10 to t=15,
flow 2 uses all the bandwidth while flow 1 loses all its packets. Since both
flows have the same parameters, shouldn't they receive the same share of the
bandwidth during that period? Or am I missing something here?

Thanks in advance.

-- 
Eduardo J. Ortega - Linux user #222873 
No fake - I'm a big fan of konqueror, and I use it for everything. -- Linus 
Torvalds



Re: [ns] two equal UDP CBR flows in droptail queue get different bandwidths?

2006-06-07 Thread Tyler Ross

This phenomenon is explained in Marc Greis's tutorial on the ns-2 website
(see http://www.isi.edu/nsnam/ns/tutorial/nsscript2.html ). The queue you're
probably using is DropTail. A DropTail queue has no concept of fairness, so it
simply drops whatever packet happens to arrive when the queue is full.

If you want some kind of fairness, you can do as the tutorial suggests and
replace DropTail with SFQ in your simulation. You will then get a fairer split
of the bandwidth. If you monitor the queue and the dropped packets, you will
see that they are queued and dropped in a much more even way.
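
For example, assuming the bottleneck is the R1-R2 link from your script
(names and the delay value are illustrative):

  # Replace DropTail with SFQ on the bottleneck link between R1 and R2.
  $ns duplex-link $r1 $r2 1Mb 10ms SFQ

  # Optionally trace queueing and drops on that link:
  $ns trace-queue $r1 $r2 [open queue.tr w]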

Eduardo J. Ortega wrote:
 Hi there:
 
 I've set up this experiment. I have two source nodes S1 and S2 directly 
 connected to a node R1 and two destination nodes D1 and D2 also directly 
 connected to a node R2. Nodes R1 and R2 are connected. All links are 1 Mb/s 
 Full duplex with DropTail. Now, here's the thing. I set up two flows, one 
 going from S1 to D1 and the other one from S2 to D2. Both flows are UDP CBR 1 
 Mb/s. Flow 1 starts at t=0 and finishes at t=20.  flow 2 starts at t=10 and 
 stops at t=15. Sim runs from t=0 to t=25.
 
 I'd expect that at t=10 (when flow 2 starts), both flows would experience the 
 same amount of packet losses, so that each one would use about 0.5Mb/s of the 
 link between R1 and R2. But what really happens is that from t=10 to t=15, 
 flow 2 uses all bandwidth while flow 1 loses all packets. Since both flows 
 have the same parameters, shouldn't they receive the same share of bandwidth 
 during that period? Or am i missing something here?
 
 Thanks in advance.
 



[ns] Sending Pings without connecting agents together

2006-06-07 Thread Saeed B

Dear All,
Hi,

I saw an email in the ns-users mailing list
( http://mailman.isi.edu/pipermail/ns-users/2001-May/014861.html )
in which someone asked for a modified version of Marc Greis's ping code that
could send pings simply with a call like
'$pa send $node'
instead of creating two ping agent instances and connecting them together.
( Greis's code is available at:
http://www.isi.edu/nsnam/ns/tutorial/nsnew.html )
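
For context, the tutorial's own usage needs two connected agents, roughly
like the sketch below (node names are illustrative); the one-agent
'$pa send $node' form would presumably require extending the ping agent's
C++ command() handler so that 'send' can take a destination:

  # Standard usage from Greis's tutorial: two Agent/Ping instances,
  # attached and connected, then "send" with no argument.
  set p0 [new Agent/Ping]
  set p1 [new Agent/Ping]
  $ns attach-agent $n0 $p0
  $ns attach-agent $n1 $p1
  $ns connect $p0 $p1
  $ns at 0.2 "$p0 send"

  # The desired one-agent form would look like:
  #   $pa send $node
  # which the tutorial code does not support as-is; it would need the
  # agent's C++ command() to accept a destination and fill in the
  # packet's destination address before sending.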

Does anyone know the answer? I need it as part of my thesis, and it would
help a lot. So if you know how to do this kind of thing, it would be very
kind of you to let me know as well.

Thanks in advance,

--Saeed.


Re: [ns] two equal UDP CBR flows in droptail queue get different bandwidths?

2006-06-07 Thread Eduardo J. Ortega

I understand that DropTail knows nothing about fairness. But on average, and
given that both flows have exactly the same characteristics, shouldn't they
experience the same average behaviour?
Thanks.


On Wednesday 07 June 2006 08:35, Tyler Ross wrote:
 This phenomenon is explained in Marc Greis's tutorial on
 the ns-2 website (see
 http://www.isi.edu/nsnam/ns/tutorial/nsscript2.html ).  The queue that
 you're probably using is a DropTail.  The DropTail queue has no concept
 of fairness, so it's going to drop whatever packet happens to be there.

 If you want some kind of fairness, you can do as the tutorial suggests,
 and replace DropTail with SFQ in your simulation.  You will then get a
 fairer split of the bandwidth.  If you monitor the queue and the dropped
 packets, you will indeed see that they are queued and dropped in a much
 more equal way.

 Eduardo J. Ortega wrote:
  Hi there:
 
  I've set up this experiment. I have two source nodes S1 and S2 directly
  connected to a node R1 and two destination nodes D1 and D2 also directly
  connected to a node R2. Nodes R1 and R2 are connected. All links are 1
  Mb/s Full duplex with DropTail. Now, here's the thing. I set up two
  flows, one going from S1 to D1 and the other one from S2 to D2. Both
  flows are UDP CBR 1 Mb/s. Flow 1 starts at t=0 and finishes at t=20. 
  flow 2 starts at t=10 and stops at t=15. Sim runs from t=0 to t=25.
 
  I'd expect that at t=10 (when flow 2 starts), both flows would experience
  the same amount of packet losses, so that each one would use about
  0.5Mb/s of the link between R1 and R2. But what really happens is that
  from t=10 to t=15, flow 2 uses all bandwidth while flow 1 loses all
  packets. Since both flows have the same parameters, shouldn't they
  receive the same share of bandwidth during that period? Or am i missing
  something here?
 
  Thanks in advance.

-- 
Eduardo J. Ortega - Linux user #222873 
No fake - I'm a big fan of konqueror, and I use it for everything. -- Linus 
Torvalds