Hi everybody,

I am not sure, but I think I have found a bug in ns-2, specifically in the
Mac/Tdma protocol for wireless networks. In the example
~/tcl/ex/wireless.tcl, 50 mobile nodes run 802.11 for 1000 seconds of
simulated time. I tried increasing the simulation time to 10000 seconds,
and it ran fine: during execution, ns-2 steadily consumed around 80% CPU
and 4.5% memory. Interestingly, when I changed the MAC protocol from 802.11
to Mac/Tdma, keeping all other parameters the same, it crashed at around
7000 seconds of simulated time. In this case ns-2 took 90% CPU (which is
fine), but it also consumed a crazy amount of memory, climbing rapidly from
5% to 15%, 30%, and so on, until it reached 90% (my machine has 2GB of RAM,
so ns-2 was using almost 1.8GB). I tried this on different machines,
including a Dell server with plenty of CPU and memory, and the result was
the same.

The error I get is this:
------------------------------------------------
terminate called after throwing an instance of 'std::bad_alloc'
  what():  St9bad_alloc
------------------------------------------------
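
For context, that message is simply what the C++ runtime prints when an
allocation fails and the resulting std::bad_alloc is never caught; in other
words, the process genuinely ran out of memory rather than hitting some
ns-2 assertion. A tiny standalone program (nothing ns-2 specific) that
leaks on purpose dies with the same message:

------------------------------------------------
// Deliberately leak until operator new fails; the uncaught
// std::bad_alloc then terminates the program with the message
// above. (Depending on the kernel's overcommit settings, the
// OOM killer may strike first instead.)
int main() {
    for (;;)
        new char[1 << 20];   // 1 MB per iteration, never freed
}
------------------------------------------------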

I have been trying to find the cause, but so far without success. My guess
is that Tdma allocates memory for packets but does not free it properly
after send/receive, leading to a memory leak. However, at the sending side
the function sendHandler(Event *e) does free the packet, and at the
receiving end the function recvHandler() passes the packet up to the higher
layer, so I have not spotted anything unreasonable there. I also tried to
install and use dmalloc, but compiling ns-2 with dmalloc support gave me
errors, so I gave up on that approach.
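
For reference, here is roughly the packet lifecycle I would expect a
well-behaved ns-2 MAC to follow. This is only a sketch against the ns-2
tree; SketchMac is a made-up class name, and the pktRx_ member and the drop
branch are illustrative, not code copied from mac-tdma.cc:

------------------------------------------------
#include "mac.h"      // ns-2 Mac base class (provides uptarget_)
#include "packet.h"   // ns-2 Packet and Packet::free()

// Placeholder MAC used only to illustrate who owns the packet when.
class SketchMac : public Mac {
public:
    void sendHandler(Event *e);
    void recvHandler();
private:
    Packet *pktRx_;   // packet currently being received
};

// Transmission complete: the sender's copy must be released here,
// otherwise every packet ever sent leaks.
void SketchMac::sendHandler(Event *e) {
    Packet::free((Packet *)e);
}

// Reception complete: either hand the packet to the upper layer
// (which then owns it) or free it explicitly on every drop path.
void SketchMac::recvHandler() {
    Packet *p = pktRx_;
    pktRx_ = 0;
    if (p == 0)
        return;
    // A drop branch (collision, wrong destination, CRC error) that
    // forgets Packet::free(p) would leak only under traffic, which
    // would match the steady growth I am seeing.
    uptarget_->recv(p, (Handler *)0);
}
------------------------------------------------

So if the normal send/receive paths in mac-tdma.cc look fine, the
error/drop paths and any per-slot buffering may be the places to check for
a missing Packet::free().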

If anybody knows the reason or a solution, please tell me. Thanks.

Hai.
