Re: [ns] 'Handoff Attempted' message
See the following link, where this was already reported as a bug, since there is no point in doing a handoff in a non-infrastructure environment. It also includes a simple solution. (Note: the asterisks (*) in the code snippet are extra formatting artifacts; ignore them.) http://mailman.isi.edu/pipermail/ns-users/2008-December/064100.html Mayur

Hi, I was wondering if anyone had any ideas what the message 'Client X: Handoff Attempted' means, which is occasionally produced when running simulations of ad hoc routing protocols (e.g. AODV, DSDV). I've looked around the list and on internet sites and found the question asked many times, but it never seems to have been answered. I'm assuming this message is just informative and that no error has occurred, but it would be nice to know what it is referring to. I've tried looking in the ns-2 source code to figure out where it is generated, but haven't been able to find it so far. Any suggestions welcome. Cheers, Tim
Re: [ns] Is there anyone who can tell me something about TCP trace file analysis without using software
In the script (e.g. a Perl file) that analyzes the trace file for UDP, you just need to change the word `udp' to `tcp'. I suppose this is what you want; otherwise I have not understood your question well. Mayur

Hello All, I have searched the internet for quite some time; however, I always find that the trace file analysis is almost always targeted at UDP, not TCP. I want to know if anyone can tell me how to analyze the TCP part (so far I have only found that people cover the UDP part thoroughly). Thank you for your help. Sincerely.
Re: [ns] Set duration value
What do you mean by a SPECIFIC duration value? It is calculated as per the 802.11 standard and filled into the duration field of the header. See the mac/mac-802_11.cc file for details. Mayur

Hello everyone, Can anyone tell me how I can set a specific duration value for request-to-send (RTS) and clear-to-send (CTS) frames in NS2? Thanks in advance
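As context for the answer above: for an RTS frame, the standard prescribes that the duration field cover the remainder of the exchange (CTS + DATA + ACK plus three SIFS gaps), which is essentially what ns-2 computes in sendRTS(). A minimal sketch of that arithmetic — the helper names, frame sizes, and rates below are illustrative, not ns-2's actual identifiers:

```cpp
#include <cassert>
#include <cmath>

// Transmission time in seconds for `bytes` at `rate_bps` (illustrative helper).
double txtime(int bytes, double rate_bps) { return 8.0 * bytes / rate_bps; }

// Duration field (in seconds) carried by an RTS frame: the time the medium
// stays reserved after the RTS finishes, per 802.11 virtual carrier sensing.
double rts_duration(int data_bytes, int cts_bytes, int ack_bytes,
                    double data_rate_bps, double basic_rate_bps, double sifs) {
    return 3 * sifs
         + txtime(cts_bytes, basic_rate_bps)   // pending CTS (at basic rate)
         + txtime(data_bytes, data_rate_bps)   // the DATA frame itself
         + txtime(ack_bytes, basic_rate_bps);  // closing ACK (at basic rate)
}
```

In mac-802_11.cc the equivalent computation is done in microseconds via usec(); forcing a specific value, as the question asks, would mean editing that expression in sendRTS()/sendCTS() directly.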
Re: [ns] What does the error Segmentation Fault mean and how can this error be rectified?
A `segmentation fault' is an error related to improper memory usage (e.g. accessing a location outside the data segment allocated to the process). It can best be analyzed with tools like ddd (with gdb) and valgrind. Mayur

Hi ns-users, If any of you know what the error segmentation fault means and how it can be solved, please send me the solution and any links you have regarding this. I am trying to simulate a scenario for 650 seconds. After simulating for 16 seconds, I get an error saying Segmentation Fault. I am getting a trace file, but only for those 16 seconds. Please help me with this so that I can get the complete trace file for my simulation time of 650 seconds. Any help is greatly appreciated. Thanks in advance.
Re: [ns] carrier sense
NAV is not a table. For a particular node, its NAV is a simple variable storing the duration (in microseconds) for which it freezes its backoff countdown while the reservation is active. You can read this value by accessing nav_ (a double member of the Mac802_11 class in mac/mac-802_11.h). Hope this helps. Mayur

hi all, It's true that a node uses carrier sensing before sending data. As per my understanding, the NAV of the nodes within carrier-sense range is updated. Nodes use this NAV to determine the time after which they are permitted to send data. My problem is to use the entries in the NAV for my work. How can I access the NAV table? Neeraj Gupta, Assistant Professor, HOD, CSE/IT Department, Hindu College of Engineering, Sonepat
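To make the reply above concrete: conceptually the NAV is just a single expiry timestamp that overheard duration fields can only ever extend, never shorten. A sketch of that behavior — a simplified model for illustration, not ns-2's actual Mac802_11 code:

```cpp
#include <cassert>
#include <cmath>

// Simplified NAV model: one expiry timestamp per node.
struct NavModel {
    double nav_ = 0.0;  // absolute time until which the medium is considered busy

    // `dur_us` is the duration field of an overheard frame, in microseconds.
    void set_nav(double now, double dur_us) {
        double t = now + dur_us * 1e-6;
        if (t > nav_) nav_ = t;  // a shorter reservation never shrinks the NAV
    }

    bool medium_reserved(double now) const { return now < nav_; }
};
```

While medium_reserved() is true the node treats the channel as virtually busy and holds its backoff countdown, which is the "freezing" described in the reply.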
Re: [ns] How to reduce execution time of scenario generation (using setdest)
Thanks a lot, Sriram! It really cut the execution time drastically, from 2.8 minutes to a negligibly small number of seconds. Actually, I had earlier avoided the God-related statements, but that did not help much. After I commented out the floyd_warshall and neighbour-related functions, the time dropped drastically. regards, Mayur

This is Sriram from IIT KGP again. In the scenario generator, i.e. setdest.cc, you can delete unwanted portions which you may not use, and this should reduce scenario generation time. For my application I did not require functions like floyd_warshall, the neighbour functions, the God functions, etc. Remove them and your scenario file generation should be faster. In the scenario file you basically require only the lines for the initial position and setdest (set destination); neighbour reachability etc. are discovered by the routing protocols themselves. -- B Sriram, India, mob: 09733706981

Dear Mayur, Setdest has many nested for loops in which it computes the neighborhood of nodes; you should remove all those nested loops as well. The actual code for generating random movement is not more than 50 lines; all the other code is for God only. When I worked with it previously, I did this optimization and it scaled well even for 1000 nodes (it generated the mobility scenario in less than 1 minute). I think this helps. regards, Manish

On Sat, Feb 28, 2009 at 2:25 PM, Mayur Mansukhlal Vegad ma...@ee.iitd.ac.in wrote: I had already tried it by commenting out the show_diffs() method call statement, but it did not help. The only four statements printing God-related stuff are those containing GOD_FORMAT and GOD_FORMAT2, which are inside the show_diffs() function. I commented out that calling statement, and the scenario file then had no GOD statements, but it still takes a similar amount of time. BTW, thanks for your considerations! Mayur

If you look at the code, setdest is spending most of its time generating god entries.
So if you are not interested in them, you can comment out the code which calculates and generates god entries, which will reduce the execution time significantly. regards, Manish

On Sat, Feb 28, 2009 at 1:06 AM, Mayur Mansukhlal Vegad ma...@ee.iitd.ac.in wrote: Dear all, Scenario generation takes around 2.5 to 3 minutes for a large number (e.g. 100) of nodes. That is too much. After that, the ns simulation takes another minute or more. Could anybody advise how to reduce the execution time, particularly of the setdest tool? with regards, Mayur
[ns] How to reduce execution time of scenario generation (using setdest)
Dear all, Scenario generation takes around 2.5 to 3 minutes for a large number (e.g. 100) of nodes. That is too much. After that, the ns simulation takes another minute or more. Could anybody advise how to reduce the execution time, particularly of the setdest tool? with regards, Mayur
Re: [ns] Fw: trace file
You should refer to Chapter 16 of the ns manual for a good understanding.

Eng Rony wrote: hi all, I ran the simple-wireless.tcl example from Marc Greis' tutorial, section IX (Running Wireless Simulations in ns), and I want to know what each field in the output trace file means. For example:
s 0.029290548 _1_ RTR --- message 32 [0 0] --- [1:255 -1:255 32 0]
M 10.0 (5.00, 2.00, 0.00), (20.00, 18.00), 1.00
s 10.0 _0_ AGT --- 2 tcp 40 [0 0] --- [0:0 1:0 32 0] [0 0] 0
r 10.0 _0_ RTR --- 2 tcp 40 [0 0] --- [0:0 1:0 32 0] [0 0] 0
D 76.430622539 _0_ IFQ ARP 2 tcp 80 [0 800] --- [0:0 1:0 32 1] [0 0] 0
D 76.430622539 _0_ RTR CBK 8 tcp 80 [0 800] --- [0:0 1:0 32 1] [0 0] 0
s 100.337960577 _1_ RTR --- 80 ack 60 [0 0] --- [1:0 0:0 32 0] [28 0] 0
r 100.340096680 _0_ AGT --- 72 ack 60 [13a 1 800] --- [1:0 0:0 32 0] [24 0] 1 0
r 69.501265756 _1_ RTR --- 15 message 32 [0 800] --- [0:255 -1:255 32 0]
I also want to know what is meant by agentTrace, RouterTrace and MacTrace. thanks
Re: [ns] Handoff disabling in 802.11
Dear Illidan and Ivan, I also faced the same problem earlier and found a simple solution. Actually, there seems to be a trivial bug, as reported (with solution) in http://mailman.isi.edu/pipermail/ns-users/2008-December/064100.html. Unfortunately I got no response from the ns-users or the ns-developers list. Mayur

Illidan wrote: Do you use 1 AP or multiple ones? If you're using only one AP, does handover still happen? On Thu, Dec 18, 2008 at 8:23 PM, Ivan_Tiger avtom...@yandex.ru wrote: Hello! Does anyone know how to disable handoff between mobile nodes and APs in 802.11? (Because my networks are independent, no handoff should exist between them.) Thanks in advance! -- View this message in context: http://www.nabble.com/Handoff-disabling-in-802.11-tp21071678p21071678.html Sent from the ns-users mailing list archive at Nabble.com.
[ns] A bug in ns-2.33 in mac/mac-802_11.cc:: Handoff is attempted in Ad Hoc (Non Infrastructure) Mode too !!
I guess there is a bug in the condition for the handoff attempt in mac/mac-802_11.cc.

Abstract: A possible bug in ns-2.
Version: ns-2.33
File: ns-2.33/mac/mac-802_11.cc
Place: Condition for the handoff attempt in the RetransmitDATA() function
In brief: It calls for a handoff even in non-infrastructure (ad hoc) mode.

Details: Presently the handoff is attempted after 3 failures. The snippet is shown below:

/*** Current Code Snippet ***/
    if (rcount == 3 && handoff == 0) {
        // start handoff process
        printf("Client %d: Handoff Attempted\n", index_);
        associated = 0;
        authenticated = 0;
        handoff = 1;
        ScanType_ = ACTIVE;
        sendPROBEREQ(MAC_BROADCAST);
        return;
    }

As observed during simulation runs, even in ad hoc scenarios (without APs) a node attempts a handoff, viz., it configures a Probe Request and goes on to start the backoff timer. The correction is trivial, as shown below: we just have to include the condition for the existence of infrastructure mode.

/*** Modification Suggested ***/
    if (infra_mode_ && rcount == 3 && handoff == 0) {
        // start handoff process
        printf("Client %d: Handoff Attempted\n", index_);
        associated = 0;
        authenticated = 0;
        handoff = 1;
        ScanType_ = ACTIVE;
        sendPROBEREQ(MAC_BROADCAST);
        return;
    }

Suggestions and comments are welcome. I have my doubts myself, as nobody has so far pointed out this simple observation. regards, Mayur
[ns] Regarding txtime() in mac-802_11: Why PHY and MAC both data rates are used in txtime calculation?
Dear Network and NS experts, Why should the MAC and PHY rates be different for a given packet to be transmitted? I could not understand the calculation of the txtime of a packet in mac-802_11.cc. As observed in the txtime() calculation there, the txtime of any packet is calculated as t = tp + tm, where tp is calculated using the PLCPDataRate for the PLCP header length, and tm is calculated using the MAC dataRate_ (or basicRate_) for the MAC header part. Now my doubt is as follows: the complete frame (PHY + MAC + MSDU) goes out as a whole, so there should be only one data rate deciding the tx time. Why are two rates at work, each for its own layer, i.e. the PHY data rate for the PHY part and the MAC data rate for the MAC part? I could not understand what happens in reality, and how it is modeled here in the ns2 simulation. Please clarify my doubts from both of these angles: theoretically, and from the ns simulation point of view too. regards, Mayur
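For what it's worth, in real 802.11 the PLCP preamble and header are transmitted at a fixed, low PHY rate so that every station can decode them, while the MAC header and payload follow at the negotiated (possibly much higher) data rate; that is exactly why txtime() sums two terms computed at two rates. A sketch of the split described above — parameter names are illustrative, not ns-2's exact signature:

```cpp
#include <cassert>
#include <cmath>

// Total transmit time = PLCP part at the (fixed, low) PLCP rate
//                     + MAC header + payload at the negotiated data rate.
double frame_txtime(int plcp_hdr_bits, double plcp_rate_bps,
                    int mac_bytes, double data_rate_bps) {
    double tp = plcp_hdr_bits / plcp_rate_bps;    // every station can decode this part
    double tm = 8.0 * mac_bytes / data_rate_bps;  // only this part uses dataRate_
    return tp + tm;
}
```

With, say, a 48-bit PLCP header at 1 Mb/s and a 1000-byte MAC frame at 2 Mb/s, the PLCP part contributes 48 µs and the MAC part 4 ms, matching the tp + tm structure quoted from mac-802_11.cc.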
Re: [ns] Fwd: Problem with mhDefer_ timer in mac-802_11.cc
Forwarded message from [EMAIL PROTECTED] -- Date: Fri, 21 Nov 2008 13:54:37 -0600, Subject: Problem with mhDefer_ timer in mac-802_11.cc, To: ns-users@isi.edu

Hi all, I am trying to introduce a delay in sending packets from the sendCTS() and sendACK() functions by using the defer timer mhDefer_. However, I see that the number of packets sent with and without the use of the mhDefer_ timer is the same; I am not seeing the desired effect. Does anyone have a clue why? Also, is this the right approach to introduce a delay before sending a packet? If not, could you please let me know what the right procedure is? Any input will be greatly appreciated. Thanks in advance, Varun

Yes, I think this should be the right approach if you are interested in introducing a random delay based on the CW (rather than a fixed SIFS) before sending CTS and ACK packets. In your case it may actually be taking effect: don't just count the number of packets; rather, analyze the trace file and look at the time between the receipt instant of an RTS (DATA) and the sending instant of the CTS (ACK) at any node, and compare it with a simulation run without your modification. You should see the difference, that's all. This is because if the overall traffic is light, the number of packets may not decrease over a given time duration, I suppose. If you get the same fixed SIFS period in both cases, then there is some problem; in that case, could you send the snippet of mac-802_11.cc that you changed to effect this? regards, Mayur
Re: [ns] how listen the packets???
Douglas Restrepo wrote: Hi ns-2 people, I'm parsing a simulation, and I can see that when, e.g., node 0 sends a packet in broadcast mode, maybe 2 or 3 neighbors hear this packet, but not all of them. My question is, what happens with the other nodes? Why don't the other nodes hear this packet? What do I have to do so that they receive it? 1 2 3 0 4 5 6 Thank u

As it seems from your topology, all the receivers are within the Rx range of node 0. So the only possible reason for this is that those other nodes' MAC is busy, either in Tx or in Rx. So it is logical and realistically correct too. Mayur
[ns] TCP max congestion window and ssthresh's values
Dear ns colleagues, How does one set the max congestion window in TCP? ns-default.tcl sets it to 0 (zero). How can that be, if the following concept is correct? maxcwnd (maximum congestion window) indicates the maximum value that the CWND can reach, i.e. after TCP enters the congestion-avoidance phase, the rate of increase of CWND becomes slow (linear rather than exponential), and when it reaches MAXCWND it stops increasing. However, if some packet loss occurs during this process, TCP enters its recovery process, resetting CWND to 1 and MAXCWND to half the current CWND. Am I conceptually (theoretically) correct? If yes, then the default value of maxcwnd must be some larger positive number, not zero! (A similar comment applies to ssthresh (slow-start threshold) too, as it is also zero by default in ns!) As ns2 is working properly, I am sure I am mistaken or misunderstanding somewhere; I just want to know where. Your expert comments are welcome. Mayur
Re: [ns] Doubt Clarified... TCP max congestion window and ssthresh's values
Mayur wrote: Dear ns colleagues, How does one set the max congestion window in TCP? ns-default.tcl sets it to 0 (zero). How can that be, if the following concept is correct? maxcwnd (maximum congestion window) indicates the maximum value that the CWND can reach. [...] If yes, then the default value of maxcwnd must be some larger positive number, not zero! (A similar comment applies to ssthresh too, as it is also zero by default in ns!) Mayur

Dear all, I found the clarification to my doubt. The code in tcp.cc is:

    // if maxcwnd_ is set (nonzero), make it the cwnd limit
    if (maxcwnd_ && (int(cwnd_) > maxcwnd_))
        cwnd_ = maxcwnd_;

So the comment itself speaks well: by default the algorithm (in ns) does not check maxcwnd_, i.e. cwnd can grow (theoretically) to infinity. If we set it with 'Agent/TCP set maxcwnd_ some_value', then it is enforced accordingly. I hope I perceived it right. Mayur
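The rule in that tcp.cc snippet can be exercised in isolation: a maxcwnd of zero means "no limit", while any positive value caps cwnd. A standalone sketch — not the ns-2 TcpAgent class, just the same conditional lifted into a free function:

```cpp
#include <cassert>

// Mirror of the tcp.cc rule quoted above: maxcwnd == 0 disables the cap,
// otherwise cwnd is clamped down to maxcwnd once its integer part exceeds it.
double apply_maxcwnd(double cwnd, int maxcwnd) {
    if (maxcwnd && (int(cwnd) > maxcwnd))
        return maxcwnd;
    return cwnd;
}
```

This is why the default of 0 in ns-default.tcl is consistent with the theory in the question: 0 is a sentinel for "unbounded", not a literal window of zero packets.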
[ns] A possible BUG in ns2, in capture() function in mac/mac-802_11.cc
Dear NS experts, There seems to be a BUG in ns2 (up to the latest version, ns-2.33).

Abstract: The capture() function should treat errored packets and normal packets differently, viz., the NAV should be set to only EIFS for errored packets, and to { EIFS + txtime(pkt) } for normal packets.

More elaboration:
* When a sender is far enough away that the received power Pr satisfies CSThresh_ <= Pr < RXThresh_, the packet is sensed but not understood. As per the 802.11 standard, for such a sensed-only packet the receiving node should wait for EIFS. In ns2 (up to ns-2.33) this is presently done only when it is the sole packet being received (the PHY marks the packet as errored, and the MAC sets the NAV to EIFS; see the recv_timer() function).
* But when more than one packet is being received (the COLLISION or CAPTURE case), there seems to be a bug in the code, viz. the capture() function always sets the NAV to (EIFS + txtime(pkt)), without checking whether the packet is marked as errored or not.

I have tried to rectify the bug (if I am correct). The corrected mac-802_11.cc is attached herewith. [Note: the original and modified versions can be toggled by just commenting/uncommenting the corresponding #define.] As described, I modified the capture() function to take a second argument indicating whether the packet is errored, and the function acts accordingly. Obviously, mac-802_11.h needs a one-line correction for the new declaration of capture(). The results in the two cases (original_capture and modified_capture) differ drastically depending on the scenario being tested. Your suggestions are welcome. regards, Mayur
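The proposed fix boils down to one branch on the errored flag at the point where capture() sets the NAV. A minimal sketch of the corrected rule — the function name and argument list here are illustrative; the real change lives inside ns-2's Mac802_11::capture():

```cpp
#include <cassert>

// NAV to set for a packet that loses the capture decision:
// a sensed-only (errored) packet should reserve the medium for EIFS alone,
// while a decodable packet reserves EIFS plus its full transmission time.
double capture_nav(bool pkt_errored, double eifs, double pkt_txtime) {
    return pkt_errored ? eifs : eifs + pkt_txtime;
}
```

The unmodified ns-2.33 code corresponds to always taking the second branch, which is exactly the behavior the email argues is wrong for errored packets.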
[ns] Sorry Wrong File Sent A possible BUG in ns2, in capture() function in mac/mac-802_11.cc
Dear all, [Sorry, I attached the wrong file. Please find the modified_mac-802_11.cc attached herewith, and kindly discard the earlier sent mac-802_11.cc file.]

Mayur wrote: Dear NS experts, There seems to be a BUG in ns2 (up to the latest version, ns-2.33). Abstract: The capture() function should treat errored packets and normal packets differently, viz., the NAV should be set to only EIFS for errored packets, and to { EIFS + txtime(pkt) } for normal packets. More elaboration: When a sender is far enough away that the received power Pr satisfies CSThresh_ <= Pr < RXThresh_, the packet is sensed but not understood. As per the 802.11 standard, for such a sensed-only packet the receiving node should wait for EIFS. In ns2 (up to ns-2.33) this is presently done only when it is the sole packet being received (the PHY marks the packet as errored, and the MAC sets the NAV to EIFS; see the recv_timer() function). But when more than one packet is being received (the COLLISION or CAPTURE case), there seems to be a bug in the code, viz. the capture() function always sets the NAV to (EIFS + txtime(pkt)), without checking whether the packet is marked as errored or not. I have tried to rectify the bug (if I am correct). The corrected mac-802_11.cc is attached herewith. [Note: the original and modified versions can be toggled by just commenting/uncommenting the corresponding #define.] As described, I modified the capture() function to take a second argument indicating whether the packet is errored, and the function acts accordingly. Obviously, mac-802_11.h needs a one-line correction for the new declaration of capture(). The results in the two cases (original_capture and modified_capture) differ drastically depending on the scenario being tested. Your suggestions are welcome. regards, Mayur
Re: [ns] SplitObject error after adding a simple print function
Asraf wrote: I am trying to print out the route table, using an rt_print function in aodv.cc plus this code in AODV::command:

int AODV::command(int argc, const char*const* argv) {
    ...
    else if (strcasecmp(argv[1], "rt_print") == 0) {
        // print routing tables to a file
        FILE *fp = fopen("route.txt", "a");
        printf("\nRouting table at node %d\n", addr());
        fclose(fp);
        return TCL_OK;
    }
}

and I call this in my Tcl like this:

for {set i 1} {$i < $nodes} {incr i} {
    $ns_ at 10.0 "$node_($i) rt_print"
}

But I get an error: ns: _o32 rt_print: (_o32 cmd line 1) invoked from within "_o32 cmd rt_print" invoked from within "catch $self cmd $args ret" (procedure _o32 line 2) (SplitObject unknown line 2) invoked from within "_o32 rt_print". Does anyone know how to solve this SplitObject error? From my understanding, a SplitObject error is caused by not having a corresponding C++ object used in Tcl. Any pointers? Or, if I want to print out the routing table in AODV, how do I do it? Many Thanks, Asraf.

In your Tcl script, you have called rt_print as "$ns_ at 10.0 $node_($i) rt_print". This treats it as if it were a procedure of the NODE object, which it certainly is not here! Rather, you should call it as "... $val(rp) rt_print", where $val(rp) is your AODV adhocRouting agent. You can now fine-tune this yourself as per your need. regards, Mayur
[ns] What is the reason for RTR CBK Drops in cmu-trace?
Dear all, What is the theoretical interpretation of the event RTR CBK in a cmu-trace? I found that it is an abbreviation for DROP_RTR_MAC_CALLBACK in cmu-trace.h. It is certainly a routing-layer event, but what is the meaning of CALLBACK here? What is the exact reason for the dropping of the packet in such trace lines? One example trace line is shown below:

D 1844.506605289 _4_ RTR CBK 322333 tcp 1060 [13a 5 4 800] --- [4:0 5:0 30 5] [67665 0] 0 0

regards, Mayur
Re: [ns] problem ftp Application
Breno Caetano wrote: hi ns-users, How can I handle received messages when I use an FTP application over TCP agents? I can't find the source of the FTP application, more specifically the file ftp.cc; I only find the other files and ftp.h. Can anybody help me? regards, Breno Caetano da Silva, Bacharel em Ciências da Computação - UFPI, Mestrando em Engenharia Elétrica, Escola de Engenharia de São Carlos - EESC, Universidade de São Paulo

Yes, the FTP and Telnet applications are implemented in OTcl only (not in C++), hence there are no .cc files for them. You may read Chapter 39 of the ns manual to learn how you could achieve what you intend to do through the Tcl script itself. Mayur
Re: [ns] range of mobile nodes
sriram balakrishnan wrote: hi, I would like to know where the range of a node/mobile node is set. For example, I want to set the range of my node to 200 meters, so that anything beyond that is reached in multiple hops.

You have to play with the carrier-sense threshold and receive threshold as follows:

Phy/WirelessPhy set CSThresh_ value_1  ;# for the carrier-sensing range
Phy/WirelessPhy set RXThresh_ value_2  ;# for the receive threshold

where value_1/value_2 is the received power level below which the packet will not be sensed/understood, respectively. To know which value to use for a particular distance, use the ns2dir/indep-utils/propagation/threshold.cc utility. Just compile and run it to learn its usage: it takes a distance (meters) as input and gives the threshold power as output, which you can use for the above values. By the way, the default values are set in ns2dir/tcl/lib/ns-default.tcl such that the carrier-sense range and receive range are 550 m and 250 m respectively. regards, Mayur
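The threshold.cc utility mentioned above essentially inverts a propagation model. With ns-2's default two-ray-ground parameters (Pt_ = 0.28183815 W, Gt = Gr = 1, ht = hr = 1.5 m), the received power at a distance d beyond the crossover point is Pr = Pt·Gt·Gr·ht²·hr²/d⁴, which at 250 m lands almost exactly on the default RXThresh_ of 3.652e-10 W. A sketch of that computation, assuming the two-ray regime applies at these distances:

```cpp
#include <cassert>
#include <cmath>

// Two-ray ground reflection model: received power (watts) at distance d (m),
// valid beyond the crossover distance where it supersedes free-space loss.
double two_ray_pr(double pt, double gt, double gr,
                  double ht, double hr, double d) {
    return pt * gt * gr * ht * ht * hr * hr / std::pow(d, 4.0);
}
```

So a 200 m receive range would mean raising RXThresh_ to roughly two_ray_pr(0.28183815, 1, 1, 1.5, 1.5, 200); running threshold.cc remains the authoritative way to get the number for your configured PHY and propagation model.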
[ns] My Misconception: TCP's packet size does not vary with the FTP's packetsize
Mayur wrote: Mats Folke wrote: Hi! It is a fundamental property of TCP to set the packet (segment) size on its own. This is set with respect to the path MTU and various buffers. Thus, the size of the incoming FTP packets will not impact the outgoing segment size, since TCP is a byte-oriented protocol. However, ns-2 is not a perfect realization of the real world. There is a parameter "Agent/TCP set packetSize_ 1000  ;# packet size used by sender (bytes)" at http://www.isi.edu/nsnam/ns/doc/node396.html. Perhaps that may help you? Many wishes, Mats Folke

Mayur wrote: Tatiana Polishchuk wrote: Did you try to change the header size? Please refer to chapter 12.1.2 of the ns2 tutorial. Tatiana

On Thu, Sep 25, 2008 at 9:03 AM, Mayur wrote: Dear all, I am using simple FTP over TCP. In ns, even though I send small packets from the application layer (ftp) (e.g. 100 bytes, using '$ftp_($i) send 100'), TCP's packet size remains fixed at 1000. What is the reason, and how do I change it? I tried to study tcp.cc's send() and sendmsg() functions, but could not understand them well. Your cooperation is appreciated. regards, Mayur

Thank you, Tatiana, for your quick response! I went through 12.1.2 of the ns manual as you suggested. I know that each layer adds a header to the SDU (Service Data Unit) received from its upper layer, so there is no point in reducing the header size. My point is different; consider the following explanation. At the application layer, the data from FTP is:
    | FTP data |
When this is handed over to the TCP layer, it adds its header, and the unit handed to the IP layer becomes:
    | TCP Header | FTP Data |
That's fine. So the total size at the TCP layer should be FTPDataSize + TCPHeaderSize. Now, the problem I face is: the TCP packet size always remains the same (1040 bytes), irrespective of the FTPDataSize, with which I expect it to vary, e.g.
for FTPData of 200 bytes it should become (200 + TCPHeaderSize) bytes. I think I am theoretically correct, and there is something I am missing in the ns2 settings. Expert users' suggestions are awaited. I hope my question is now clear. regards, Mayur

Dear Mats, I understand that TCP is a byte-oriented protocol, so theoretically TCP will set the segment size on its own, depending on many factors such as the path MTU, etc. But consider only one small application-layer 'packet' being sent, say of size 200 bytes (let's consider this small enough to be accommodated within a single TCP segment). Q: Now, in such a case, what will happen in theory? I know the answer for 'what will happen in an ns2 simulation'; it is as follows: I observed that for this single packet too, TCP sends a single packet of size 1040 bytes (rock-solid fixed, by the default setting 'Agent/TCP set packetSize_ 1000', as you rightly said). As you suggested, I experimented with 'Agent/TCP set packetSize_ someOtherSize'. It works, but it works on a per-flow basis, while I need it to work on a per-packet (per-segment) basis. In short, I want to simulate a more realistic scenario in which the segment size varies continuously (perhaps following some random distribution). Is this possible in the current ns2? If not (as it is not a perfect realization of a real network), what is the way out? Regards, Mayur

Dear all, Sorry to all for the trivial query! It was my silly misunderstanding, and I have understood it now: a big message from the application layer is split by the TCP layer into segments of 1000 bytes each, with the segment size kept fixed at 1000. Thanks to all for your considerations. Mayur
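The resolution above — a large application write is carved into fixed-size segments — is easy to picture with a generic byte-stream segmenter. This is an illustration of the idea only, not ns-2's actual one-way TcpAgent (which simply counts packets of packetSize_ bytes rather than carrying real data):

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

// Split `nbytes` of application data into TCP-style segments of at most
// `segsize` bytes each; a real byte stream coalesces writes the same way,
// which is why per-write "packet" sizes never survive to the wire.
std::vector<int> segment_stream(int nbytes, int segsize) {
    std::vector<int> segs;
    while (nbytes > 0) {
        int s = std::min(nbytes, segsize);
        segs.push_back(s);
        nbytes -= s;
    }
    return segs;
}
```

For example, a 2500-byte application message with a 1000-byte segment size yields three segments, and a 200-byte message fits in one; ns-2's simulated TCP instead pads every transmission out to the fixed packetSize_, which is the behavior the thread observed.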
[ns] TCP's packet size does not vary with the FTP's packetsize.
Dear all, I am using simple FTP over TCP. In ns, even though I send small packets from the application layer (ftp) (e.g. 100 bytes, using '$ftp_($i) send 100'), TCP's packet size remains fixed at 1000. What is the reason, and how do I change it? I tried to study tcp.cc's send() and sendmsg() functions, but could not understand them well. Your cooperation is appreciated. regards, Mayur
Re: [ns] TCP's packet size does not vary with the FTP's packetsize.
Tatiana Polishchuk wrote: Did you try to change the header size? Please refer to chapter 12.1.2 of the ns2 tutorial. Tatiana

On Thu, Sep 25, 2008 at 9:03 AM, Mayur wrote: Dear all, I am using simple FTP over TCP. In ns, even though I send small packets from the application layer (ftp) (e.g. 100 bytes, using '$ftp_($i) send 100'), TCP's packet size remains fixed at 1000. What is the reason, and how do I change it? I tried to study tcp.cc's send() and sendmsg() functions, but could not understand them well. Your cooperation is appreciated. regards, Mayur

Thank you, Tatiana, for your quick response! I went through 12.1.2 of the ns manual as you suggested. I know that each layer adds a header to the SDU (Service Data Unit) received from its upper layer, so there is no point in reducing the header size. My point is different; consider the following explanation. At the application layer, the data from FTP is:
    | FTP data |
When this is handed over to the TCP layer, it adds its header, and the unit handed to the IP layer becomes:
    | TCP Header | FTP Data |
That's fine. So the total size at the TCP layer should be FTPDataSize + TCPHeaderSize. Now, the problem I face is: the TCP packet size always remains the same (1040 bytes), irrespective of the FTPDataSize, with which I expect it to vary, e.g. for FTPData of 200 bytes it should become (200 + TCPHeaderSize) bytes. I think I am theoretically correct, and there is something I am missing in the ns2 settings. Expert users' suggestions are awaited. I hope my question is now clear. regards, Mayur
Re: [ns] TCP's packet size does not vary with the FTP's packetsize.
Mubashir Rehmani wrote: Hello, I hope that you are fine. I think that the TCP header length will remain the same. For instance, if you are sending FTP data, then total size = FTP data + TCP header: for FTP data = 200 and TCP header = 1024, total size = 200 + 1024; for FTP data = 300 and TCP header = 1024, total size = 300 + 1024. I hope I am right? Regards

2008/9/25 Mayur: Tatiana Polishchuk wrote: Did you try to change the header size? Please refer to chapter 12.1.2 of the ns2 tutorial. Tatiana

On Thu, Sep 25, 2008 at 9:03 AM, Mayur wrote: Dear all, I am using simple FTP over TCP. In ns, even though I send small packets from the application layer (ftp) (e.g. 100 bytes, using '$ftp_($i) send 100'), TCP's packet size remains fixed at 1000. What is the reason, and how do I change it? I tried to study tcp.cc's send() and sendmsg() functions, but could not understand them well. Your cooperation is appreciated. regards, Mayur

Thank you, Tatiana, for your quick response! I went through 12.1.2 of the ns manual as you suggested. I know that each layer adds a header to the SDU (Service Data Unit) received from its upper layer, so there is no point in reducing the header size. My point is different; consider the following explanation. At the application layer, the data from FTP is:
    | FTP data |
When this is handed over to the TCP layer, it adds its header, and the unit handed to the IP layer becomes:
    | TCP Header | FTP Data |
That's fine. So the total size at the TCP layer should be FTPDataSize + TCPHeaderSize. Now, the problem I face is: the TCP packet size always remains the same (1040 bytes), irrespective of the FTPDataSize, with which I expect it to vary, e.g. for FTPData of 200 bytes it should become (200 + TCPHeaderSize) bytes. I think I am theoretically correct, and there is something I am missing in the ns2 settings.
Expert users' suggestions are awaited. I hope my question is now clear. regards, Mayur -- Mubashir Husain Rehmani

Thanks, Mubashir. Is the TCP header really that long (1024 bytes)??? Even if it were, it is not doing what we expect. Consider the following observations: in the cmu-trace, I observe that the packet size when it leaves the TCP layer and is received by the IP layer (i.e. the AGT 'sent' lines in the trace) is always fixed at 1040 bytes. Similarly, the RTR 'sent' lines are always 1060 bytes, which shows that the routing (IP) layer has added 20 bytes (the standard IP header size). And lastly, the MAC 'sent' lines show a packet size of 1112 bytes (the MAC added 52 bytes) for the same packet traveling from AGT to the MAC layer. Now, all these figures (1040, 1060 and 1112 bytes) remain rock-solid fixed; they do not vary with the FTP data size. However, they do change when we change the TCP packetSize_ directly, e.g. when I do '$tcp1 set packetSize_ 200' the above figures change to 240, 260 and 312 bytes respectively. So, in a nutshell, the TCP packet size varies with directly setting the packetSize_ variable, and not in accordance with the upper layer's data unit. So I think that, theoretically, this is incorrect. Or I am missing something.
Re: [ns] TCP's packet size does not vary with the FTP's packet size.
Mats Folke wrote:

Hi! It is a fundamental property of TCP to set the packet (segment) size on its own. It is set with respect to the path MTU and various buffers. Thus the size of the incoming FTP packets will not impact the outgoing segment size, since TCP is a byte-oriented protocol. However, ns-2 is not a perfect realization of the real world. There is a parameter

  Agent/TCP set packetSize_ 1000  # packet size used by sender (bytes)

at http://www.isi.edu/nsnam/ns/doc/node396.html. Perhaps that may help you? Many wishes, Mats Folke

Dear Mats, I understand that TCP is a byte-oriented protocol, so theoretically TCP will set the segment size on its own, depending on many factors such as the path MTU. But consider only one small application-layer 'packet' being sent, say of 200 bytes. (Let's consider this small enough to be accommodated within a single TCP segment.)

Q. Now, in such a case, what will happen in theory?

I know the answer to 'What will happen in an ns2 simulation?'. It is as follows: in ns2, I observed that for this single packet too, TCP sends a single packet of size 1040 bytes (rock-solid fixed, by the default setting 'Agent/TCP set packetSize_ 1000', as you rightly said). I tried, as you suggested, changing 'Agent/TCP set packetSize_ someOtherSize'. It works, but it works on a per-flow basis, while I need it to work on a per-packet (per-segment) basis. In short, I want to simulate a more realistic scenario in which the segment size varies continuously (perhaps following some random distribution).

Is that possible in current ns2? If not (as it is not a perfect realization of the real network), what is the way out? Regards, Mayur
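A minimal sketch of the per-flow workaround discussed above, assuming an existing simulator instance $ns and an already configured Agent/TCP named $tcp1 (both names are placeholders): since ns-2 only exposes packetSize_ per flow, a varying segment size can only be approximated by rescheduling changes to it over simulated time. This changes the size of all segments sent after each instant, not truly per segment.

```tcl
# Sketch only: approximate a time-varying segment size in ns-2.
# packetSize_ applies per flow, so this re-sets it once per simulated
# second rather than per individual segment.
for {set t 1.0} {$t < 100.0} {set t [expr {$t + 1.0}]} {
    # draw a new size uniformly from [200, 1000) bytes (illustrative choice)
    set sz [expr {200 + int(rand() * 800)}]
    $ns at $t "$tcp1 set packetSize_ $sz"
}
```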
[ns] How to generate a Tracefile for the Application/Traffic/Trace input
Dear NS-Lovers, How do I generate a binary format file (with two columns: time_in_us, pkt_size) as expected by Application/Traffic/Trace? Is there any ready-made utility code? I don't understand the exact format that is expected here, and hence how to generate it. I searched on the net but could not find anything. Perhaps the question is too trivial... Mayur
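For reference, the format expected by Application/Traffic/Trace is described in the ns-2 manual's section on trace-driven traffic generation: each record is two 32-bit integers in network (big-endian) byte order, the first being the interval until the next packet is generated, in microseconds, and the second the size of that packet in bytes. A small tclsh sketch that writes such a file (the file name and the sample records are made up):

```tcl
# Write a few records in the Application/Traffic/Trace binary format:
# per record, two big-endian 32-bit ints ("I" in [binary format]) --
# inter-packet gap in microseconds, then packet size in bytes.
set f [open cbr.trace w]
fconfigure $f -translation binary
foreach {interval_us pkt_size} {
    10000  512
    10000  512
    20000 1024
} {
    puts -nonewline $f [binary format II $interval_us $pkt_size]
}
close $f
```

The resulting file can then be attached in the usual way:

```tcl
set tfile [new Tracefile]
$tfile filename cbr.trace
set traffic [new Application/Traffic/Trace]
$traffic attach-tracefile $tfile
```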
Re: [ns] several sinks on single node
Dear Abdelhak, Answers are intermingled with your questions...

Abdelhak Farsi wrote: Hi there, I assume that in 802.11 WLAN working in infrastructure mode,

Your assumption is right, but only for the latest version, 2.33. There are various versions of WLAN implementations available; go through Section 16.3 in the (latest) NS Manual for further info.

I have a base station sending different TCP streams to a single wireless station (each stream is distinguished by the size of the packets sent, and each stream represents a single application). My question is: is it possible to define several sinks on the receiving wireless station to intercept all the streams?

Yes, it is possible. A single node can have two (or more) TCPSink agents, each receiving from a different TCP source agent. It is not clear what problem you actually faced. best regards, Mayur
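A minimal sketch of such a setup, assuming $ns, a base-station node $bs and a wireless station $sta have already been created and configured (all names are placeholders): two TCP sources on $bs and two sinks on $sta, one per stream.

```tcl
# Two independent TCP streams terminating at the same node $sta.
set tcp1  [new Agent/TCP]
set tcp2  [new Agent/TCP]
set sink1 [new Agent/TCPSink]
set sink2 [new Agent/TCPSink]
$ns attach-agent $bs  $tcp1
$ns attach-agent $bs  $tcp2
$ns attach-agent $sta $sink1
$ns attach-agent $sta $sink2
$ns connect $tcp1 $sink1
$ns connect $tcp2 $sink2
# Distinguish the streams by packet size, as described above:
$tcp1 set packetSize_ 500
$tcp2 set packetSize_ 1000
```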
Re: [ns] remove ns2
Dear Kamal, You need not remove it from your computer unless disk space is scarce, which I suppose is not the issue. Just change the PATH, LD_LIBRARY_PATH and TCL_LIBRARY environment variables as instructed after the installation of ns2.26, so that ns2.26/ns runs instead of ns2.33/ns. You can still switch back to ns2.33 whenever you want by following similar steps. To be cleverer, you can place some 'if ... fi' statements in your $HOME/.bashrc script so that you can switch to any of the available versions of ns. with regards, Mayur

eng Rony wrote: How can I remove ns2.33? And I want the full steps to install ns2.26. thanks
Re: [ns] how to retrieve information from trace file
Dear Srirupa, For changing the data rate use:

  # I assume you are using CBR...
  set traffic [new Application/Traffic/CBR]
  set CBRrate 1Mb   ;# substitute your desired data rate
  $traffic set rate_ $CBRrate

regards, Mayur

Srirupa Dasgupta wrote: Dear ns-friends, I have created a new packet type named pong, like ping. Its corresponding tcl script is also running and producing a trace output file. Now I want to plot the throughput by changing the time of the stop variable in tcl, but the trace file remains the same every time. How do I proceed? Also, my agent currently has only a fixed packet size. How do I set a data rate in my new agent, so that by varying the data rate I can plot the throughput? What other factors can I plot? please help, Srirupa
Re: [ns] Sending a packet - howto/where
Dear Fernando, recv() is used for sending too. It pertains to the layered architecture: a lower layer RECEIVEs a packet from its upper layer. Check the code of the recv() function carefully and you will observe that, first of all, it checks the 'direction' of the packet; if it is 'down', it is a packet RECEIVED from the upper layer, to be sent onward. Hope it's clear!
[ns] Segmentation fault for more simultaneous CBR traffics in Wireless simulation
Dear all, We are doing a wireless static (802.11) chain-topology simulation to determine the energy consumption per successfully received packet. We face a segmentation fault (core dumped) with more than 4 simultaneous active CBR traffics; with 4 or fewer simultaneous flows it works nicely. In the above case, we started the CBR traffics at 10 seconds and stopped them at 120 seconds. When we changed this to 10 and 25 seconds respectively, the (segmentation fault) problem started after 6 simultaneous traffics. We don't understand exactly what the problem is. Is it some limitation of ns? We are using: 1. ns 2.30; 2. Operating System: Ubuntu 7.10 (output of uname -a: Linux wsl1 2.6.22-14-generic #1 SMP Tue Feb 12 07:42:25 UTC 2008 i686 GNU/Linux); 3. a machine with an Intel Core 2 Duo processor and 2 GB RAM. I've read the FAQ, the ns-problems page and the manual, and I couldn't find the answer there. with regards, Mayur
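The flow schedule described above can be sketched as follows, assuming an array udp_($i) of already attached and connected UDP agents and a flow count $numFlows (both names are placeholders):

```tcl
# Start $numFlows simultaneous CBR flows at 10 s and stop them at 120 s.
for {set i 0} {$i < $numFlows} {incr i} {
    set cbr_($i) [new Application/Traffic/CBR]
    $cbr_($i) attach-agent $udp_($i)
    $ns at 10.0  "$cbr_($i) start"
    $ns at 120.0 "$cbr_($i) stop"
}
```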
[ns] Basic power control 802.11 MAC
Dear all, We want to implement a basic power-control 802.11 MAC protocol, where RTS/CTS and DATA/ACK are transmitted using different power levels. How do we select a particular power level for sending a particular type of packet? Mayur
[ns] Problem in M/M/m Queueing system implementation.
Dear all, We have to implement an M/M/m queueing system (e.g. m = 5 servers) with no buffer, lambda = 400 and mu = 10 ms. Should we keep the queue limit ZERO or ONE for the no-buffer case here? We are in doubt because this gives all the packets (100%) dropped, while making the queue limit 2 (or more) reduces the dropped packets to less than 16%. Note: we used an array of nodes for the multiple servers and an array of UDP agents attached to each of them. We would be thankful if someone could give a little hint on where we are making a mistake! regards, Mayur