Dan,

In the unconstrained case, if your offered traffic rate exceeds the radio
model's configured data rate (i.e., you are oversubscribing the channel),
you will see higher bandwidth utilization and more collisions. You can
verify why the radio model is dropping packets by looking at the mac
UnicastPacketDropTable0 and BroadcastPacketDropTable0 tables. You will
also need to examine how the lost traffic affects your TCP sessions,
taking the session configuration into account.

The avgTimedEventLatencyRatio statistic represents the ratio of the
timer latency over the requested timer duration, where timer latency is
the amount of time that it takes for the registered timer callback to be
called after the timer has fired. So, if you have a timer that is
scheduled to fire in 100ms and it takes 20ms for the callback to be
invoked after the timer expires, the ratio would be 0.2. Lower
ratios indicate more responsive systems. If you are seeing similar
numbers for both your constrained and unconstrained cases, that
indicates your system is not being taxed.
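
As a concrete illustration (a minimal sketch; the function and variable
names below are mine, not from the emane source), the statistic is just
the per-timer latency divided by the requested duration, averaged over
all fired timers:

```python
# Sketch of how avgTimedEventLatencyRatio is computed conceptually.
# Each sample is (requested_duration, callback_latency), both in the
# same time unit (e.g. milliseconds).

def avg_timed_event_latency_ratio(samples):
    """Average of per-timer callback latency / requested timer duration."""
    ratios = [latency / duration for duration, latency in samples]
    return sum(ratios) / len(ratios)

# A timer scheduled for 100ms whose callback runs 20ms late -> 0.2
print(avg_timed_event_latency_ratio([(100.0, 20.0)]))  # 0.2
```

A sustained ratio near 1 means callbacks are taking about as long to be
serviced as the intervals they were scheduled for, i.e. the emulation is
falling behind.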

The UnicastPacketDropTable0 Dst MAC column for the 802.11abg radio model
indicates the number of upstream packets dropped because they are not
destined for the receiving NEM. As you stated, this is due to
promiscuous mode being disabled.

I would expect that if your nodes are set up A <-> B <-> C, you would see
'Dst MAC' drops on A for B-sourced traffic to C and on C for B-sourced
traffic to A. Your routing protocol should determine that B is the next
hop for A and C to communicate, so I would not expect to see 'Dst MAC'
drops from any direct A-to-C or C-to-A communication attempts.

You can use the phy UnicastPacketDropTable0 and BroadcastPacketDropTable0
tables to verify your pathloss is set up properly for your desired
topology, i.e. that C is dropping everything from A due to the
propagation model, and vice versa.
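
One rough way to check that is to capture the table output and tally the
drop counts per source NEM. The sketch below parses an illustrative
'|'-separated dump; the sample text and column names are made up for the
example, so check them against the actual headers your emane version
emits:

```python
# Sketch: tally per-reason drops from a captured drop-table dump.
# SAMPLE is illustrative only; real emanesh output may differ.

SAMPLE = """\
NEM | Propagation Model | Rx Sensitivity
1   | 1500              | 12
2   | 0                 | 0
"""

def parse_drop_table(text):
    """Return {nem: {column: count}} from a '|'-separated table dump."""
    lines = [l for l in text.splitlines() if l.strip()]
    headers = [h.strip() for h in lines[0].split('|')][1:]
    table = {}
    for line in lines[1:]:
        cells = [c.strip() for c in line.split('|')]
        table[int(cells[0])] = dict(zip(headers, (int(c) for c in cells[1:])))
    return table

table = parse_drop_table(SAMPLE)
# On node C, a large propagation-model count for NEM 1 (node A) would
# confirm A's transmissions are being dropped by pathloss, as intended.
print(table[1]['Propagation Model'])  # 1500
```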

Flow control will only work with Linux kernel versions < 3.8. The tuntap
driver changed and no longer supports back pressure. You can patch the
driver to achieve the desired effect; take a look here:

 https://github.com/adjacentlink/tun_flowctl

To access transport statistics, you will need to use 'internal'
transports. The most recent version of CORE may support this. For more
information see:

 https://github.com/adjacentlink/emane/wiki/Release-Notes#092

--
Steven Galgano
Adjacent Link LLC
www.adjacentlink.com



On 04/24/2015 09:43 AM, Dan O'Keeffe wrote:
> Hello,
> I'm trying to use the various tables and statistics provided by emanesh
> to debug a CORE/EMANE 802.11abg scenario, but even after reading the
> docs and tutorials I'm unclear as to the significance of some of the
> statistic measurements I'm seeing.
> 
> In my scenario I have three nodes in a chain, with a source application
> on node 1 sending messages over TCP to an application running on node
> 2, which in turn sends the messages to a sink application on node 3. If
> I send data as fast as I can (i.e. rely on TCP backpressure), I see
> very large latencies (on the order of a few seconds). If I rate limit
> the source to under-utilize the channel, the latencies are on the order
> of tens of milliseconds. Note that I have flow control enabled with the
> standard number of tokens (10).
> 
> What I'd like to understand is whether the latency numbers I'm seeing
> without rate limiting are realistic (due to e.g. increased packet loss
> from contention, or TCP send socket buffering), or whether EMANE can't
> keep up and is introducing significant extra delay. In a previous email
> to the core-users mailing list it was suggested to use emanesh to
> examine various statistics regarding the phy, mac, and virtual
> transport layers (see Steven Galgano's email below). However, I'm
> having the following problems:
> 
> (i) emanesh only shows me statistics regarding the phy and mac layers,
> but nothing about the virtual transport (i.e.
> if I do a 'get stat * all', I only get information about mac and phy
> statistics). Is this something I have to enable explicitly?
> 
> (ii) I'm finding it difficult to reason about the significance of the
> numbers I'm seeing. When I compare the mac & phy statistics for the
> normal and rate-limited cases, they don't look much different. Should I
> conclude from this that EMANE isn't bottlenecked on mac or phy
> processing? Presuming all of the 'delay' statistics are in
> microseconds, most delays seem to be under a millisecond - does this
> seem reasonable? Some example measurements for the statistics suggested
> by Steven for one of the nodes are below (for both the unconstrained
> and rate-limited executions). Can anyone advise if they look 'normal'?
> 
> 'Unconstrained sender'
> -------------------------------
> [emanesh (localhost:47000)] ## get stat 3 mac
> nem 3   mac  avgDownstreamProcessingDelay0 = 326.528320312
> nem 3   mac  avgProcessAPIQueueDepth = 1.01472248449
> nem 3   mac  avgProcessAPIQueueWait = 21.5879439619
> nem 3   mac  avgTimedEventLatency = 40.8964598884
> nem 3   mac  avgTimedEventLatencyRatio = 0.131895821796
> nem 3   mac  avgUpstreamProcessingDelay0 = 465.360961914
> 
> [emanesh (localhost:47000)] ## get stat 3 phy
> nem 3   phy  avgDownstreamProcessingDelay0 = 3.34661269188
> nem 3   phy  avgProcessAPIQueueDepth = 1.00564204968
> nem 3   phy  avgProcessAPIQueueWait = 32.1192521699
> nem 3   phy  avgUpstreamProcessingDelay0 = 4.33834266663
> 
> 'Rate limited sender'
> ----------------------------
> [emanesh (localhost:47000)] ## get stat 3 mac
> nem 3   mac  avgDownstreamProcessingDelay0 = 240.181991577
> nem 3   mac  avgProcessAPIQueueDepth = 1.0200306159
> nem 3   mac  avgProcessAPIQueueWait = 24.7767970557
> nem 3   mac  avgTimedEventLatency = 48.3653869226
> nem 3   mac  avgTimedEventLatencyRatio = 0.247179240988
> nem 3   mac  avgUpstreamProcessingDelay0 = 422.160675049
> 
> [emanesh (localhost:47000)] ## get stat 3 phy
> nem 3   phy  avgDownstreamProcessingDelay0 = 3.57592797279
> nem 3   phy  avgProcessAPIQueueDepth = 1.01447435246
> nem 3   phy  avgProcessAPIQueueWait = 47.6275234891
> nem 3   phy  avgUpstreamProcessingDelay0 = 7.49943065643
> 
> (iii) The docs say that a timed event latency ratio near 1 is bad. Does
> that mean a ratio of 0.1-0.2 is ok?
> 
> (iv) What is the meaning of the 'Dst MAC' column of the
> UnicastPacketDropTable0 table (from e.g. get table * mac)? I see a
> large number of packet drops in that column for the unconstrained case.
> From turning up the logging on emane I think most of them are to do
> with promiscuous mode not being enabled or the receiver sensitivity not
> being high enough. Can I ignore those drops?
> 
> Any help much appreciated,
> Dan
> 
> 
> On 23/03/2015 14:24, Steven Galgano wrote:
>> Dan,
>>
>> Most emane 0.9 models contain a set of statistics that can be used to
>> determine how your emulation is performing. These statistics aim to show
>> average processing delay and timer latency. As you characterize your
>> hardware and scenario, you can monitor these to determine the falloff
>> point.
>>
>> Virtual Transport:
>>   avgDownstreamProcessingDelay
>>   avgProcessAPIQueueDepth
>>   avgProcessAPIQueueWait
>>   avgTimedEventLatency
>>   avgTimedEventLatencyRatio
>>   avgUpstreamProcessingDelay
>>
>> https://github.com/adjacentlink/emane/wiki/Virtual-Transport#Statistics
>>
>> IEEE802.11abg:
>>   avgDownstreamProcessingDelay0
>>   avgDownstreamProcessingDelay1
>>   avgDownstreamProcessingDelay2
>>   avgDownstreamProcessingDelay3
>>   avgProcessAPIQueueDepth
>>   avgProcessAPIQueueWait
>>   avgTimedEventLatency
>>   avgTimedEventLatencyRatio
>>   avgUpstreamProcessingDelay0
>>   avgUpstreamProcessingDelay1
>>   avgUpstreamProcessingDelay2
>>   avgUpstreamProcessingDelay3
>>
>> https://github.com/adjacentlink/emane/wiki/IEEE-802.11abg-Model#Statistics
>>
>>
>> Phy:
>>   avgDownstreamProcessingDelay0
>>   avgProcessAPIQueueDepth
>>   avgProcessAPIQueueWait
>>   avgTimedEventLatency
>>   avgTimedEventLatencyRatio
>>   avgUpstreamProcessingDelay0
>>
>> https://github.com/adjacentlink/emane/wiki/Physical-Layer-Model#Statistics
>>
>>
> 
_______________________________________________
emane-users mailing list
[email protected]
http://pf.itd.nrl.navy.mil/mailman/listinfo/emane-users