Arnold,

I configured a 4 container experiment and ran a quick test:

 [Host 1] <- lan -> [Radio 1] <- OTA -> [Radio 2] <- lan -> [Host 2]
  LXC 1              LXC 2               LXC 3               LXC 4

iperf was run in the Host 1 and Host 2 containers (UDP, 1400-byte packets,
100 Mbps). I observed a relatively consistent 95 Mbps without assigning
specific CPUs to each container. Additional testing, which I did not
perform, might include pinning one or more dedicated CPUs to each
container.
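For reference, a test along those lines can be run with standard iperf (v2)
options; the address below is a placeholder for Host 2 on the emulated
network, not something from your setup:

```shell
# On Host 2 (receiver): start an iperf UDP server
iperf -s -u

# On Host 1 (sender): 100 Mbps of 1400-byte UDP datagrams for 30 seconds.
# 10.0.0.2 is a placeholder for Host 2's address.
iperf -c 10.0.0.2 -u -b 100M -l 1400 -t 30
```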

The RF Pipe downstream queue size is not exposed as a configuration
parameter. If you wish to increase the queue size, you'll need to make
the change here:

https://github.com/adjacentlink/emane/blob/master/src/models/mac/rfpipe/downstreamqueue.cc#L39

-- 
Steven Galgano
Adjacent Link LLC
www.adjacentlink.com


On 07/27/2018 03:18 PM, Zhongren Cao wrote:
> Hi Steve,
> 
> We are interested in emulating a high-speed wireless network using RFpipe 
> NEMs. To start, we set up a small example in which only two containers are 
> active, representing two network nodes. The “datarate” in rfpipemaclayer is 
> set to 100M. After starting EMANE, we ran iperf UDP to measure the 
> throughput between the two emulated radio nodes. The path loss is set such 
> that we should get zero packet loss, so we expected a throughput 
> measurement very close to 100 Mbps. However, we couldn’t get that; instead 
> of 100 Mbps, we could only get about 83 Mbps.
> 
> Upon investigation, we noticed that the transmitter’s RFpipe MAC dropped many 
> packets, as shown in the following statistics.
> 
> nem 2   mac  numDownstreamPacketsUnicastRx0 = 142669
> nem 2   mac  numDownstreamPacketsUnicastTx0 = 105646
> 
> and
> 
> nem 2   mac UnicastPacketDropTable0
> | NEM | SINR | Reg Id | Dst MAC | Queue Overflow | Bad Control | Bad Spectrum Query | Flow Control |
> | 1   | 0    | 0      | 0       | 37023          | 0           | 0                  | 0            |
> 
> 
> The TX MAC got 142669 unicast packets from its virtual transport layer but 
> only sent 105646 unicast packets down to its PHY layer. We also verified 
> that these 105646 packets were successfully delivered all the way to the 
> RX virtual transport and into the iperf server running at the RX node. 
> 
> How can we increase the buffer size to resolve the queue overflow issue?
> 
> Thanks,
> Arnold 
> 
> 
> 
_______________________________________________
emane-users mailing list
[email protected]
https://publists.nrl.navy.mil/mailman/listinfo/emane-users