On 4 May 2015 at 02:15, Hongbo Zhang <[email protected]> wrote:
> On 1 May 2015 at 00:41, Santosh Shukla <[email protected]> wrote:
>> On 30 April 2015 at 09:18, Mike Holmes <[email protected]> wrote:
>>> Is this a good result?
>>>
>> Nope, and it looks like they are more functional, perhaps
>> LAVA-integrable material. He's using an emulated NIC.
>> We did talk about it in one of our odp-virt calls.
>
> Right.
>
>>
>>> Are you able to get a comparison to the native platform SDK for the
>>> machine you ran on? If this was x86, can we run native DPDK?
>>> If this was x86, I assume you used odp-dpdk, but maybe you used
>>> linux-generic, which will not perform well.
>>>
>>
>> I ran odp-dpdk in guest mode a while back. It gives close to line
>> rate, though less than plain DPDK running in the guest, and we know
>> the root cause: Venky highlighted it in his report in the very early
>> days. But that's a different problem, and I guess the odp-dpdk work
>> is likely to address it.
>>
>> However, Hongbo can in any case create a LAVA setup where odp-dpdk
>> (on an x86 box, using a DPDK-favorable NIC) does l2fwd in the guest,
>> and that setup reports results in pps, vcpu utilization and, if
>> possible, RTT (I guess the RTT measurement isn't there yet; we'll
>> have to write it).
>>
>
> Yes, I would like to start a new discussion about setting up l2fwd in
> the guest in LAVA; there are some things that need to be
> discussed/confirmed before I take further action.
>
Like what?

Or else set up an HO meeting and send an invite for discussion. Thanks.

>> HTH!
>>
>>> On 30 April 2015 at 08:42, Hongbo Zhang <[email protected]> wrote:
>>>>
>>>> Hi,
>>>> I set up a test to run ODP in a VM guest to measure the ODP
>>>> throughput there. The idea is:
>>>> in the host, run odp_generator to send packets from host br0 to
>>>> guest eth0; in the guest, run odp_l2fwd to forward packets from its
>>>> eth0 to eth1; then in the host, run odp_generator to receive these
>>>> packets from br1.
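
The traffic path described above, sketched out (addresses taken from the
setup steps below):

```text
odp_generator (send, host)
    -> br0 (10.0.3.15) -> tap0 -> guest eth0 (10.0.3.16)
    -> odp_l2fwd (guest) forwards eth0 -> eth1 (10.0.4.16)
    -> tap1 -> br1 (10.0.4.15) -> odp_generator (receive, host)
```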
>>>>
>>>> Here are steps of my test:
>>>> 0. install tools and compile odp in guest and host.
>>>> 1. host network interface preparation:
>>>> sudo tunctl -u root
>>>> sudo tunctl -u root
>>>> sudo ifconfig tap0 0.0.0.0 up
>>>> sudo ifconfig tap1 0.0.0.0 up
>>>> sudo brctl addbr br0
>>>> sudo brctl addbr br1
>>>> sudo brctl addif br0 tap0 eth2
>>>> sudo brctl addif br1 tap1 eth3
>>>> sudo ifconfig eth2 0.0.0.0
>>>> sudo ifconfig br0 10.0.3.15/24 up
>>>> sudo ifconfig eth3 0.0.0.0
>>>> sudo ifconfig br1 10.0.4.15/24 up
>>>> 2. launch the qemu vm
>>>> sudo qemu-system-i386 -hda debian_wheezy_i386_standard.qcow2 -net
>>>> nic,vlan=0 -net tap,vlan=0,ifname=tap0,script=no,downscript=no -net
>>>> nic,vlan=1 -net tap,vlan=1,ifname=tap1,script=no,downscript=no -smp 2
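
As an aside, the `-net ...,vlan=N` syntax used above was deprecated and
later removed in newer QEMU releases. A sketch of a roughly equivalent
two-NIC invocation on a newer QEMU (the `net0`/`net1` ids and the e1000
device model are assumptions, chosen to match QEMU's default NIC):

```shell
# Same guest with two tap-backed NICs, using -netdev/-device pairs
# instead of the deprecated vlan= syntax.
sudo qemu-system-i386 -hda debian_wheezy_i386_standard.qcow2 -smp 2 \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -device e1000,netdev=net0 \
    -netdev tap,id=net1,ifname=tap1,script=no,downscript=no \
    -device e1000,netdev=net1
```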
>>>> 3. guest network interface configuration
>>>> ifconfig eth0 10.0.3.16/24 up
>>>> ifconfig eth1 10.0.4.16/24 up
>>>> 4. in the host, in one terminal:
>>>> sudo ./example/generator/odp_generator -m r -I br1
>>>> in another terminal:
>>>> sudo ./example/generator/odp_generator --srcmac 08:00:27:28:3e:ec
>>>> --dstmac 52:54:00:12:34:56 -I br0 -m u --srcip 10.0.3.15 --dstip
>>>> 10.0.4.255
>>>> 5. in the guest, start l2fwd:
>>>> ./test/performance/odp_l2fwd -i eth0,eth1 -m 0 -t 30
>>>>
>>>> Here is part of the results log of l2fwd in the guest:
>>>> ......
>>>> 1280 pps, 3158 max pps, 0 total drops
>>>> 1216 pps, 3158 max pps, 0 total drops
>>>> 2016 pps, 3158 max pps, 0 total drops
>>>> 1680 pps, 3158 max pps, 0 total drops
>>>> TEST RESULT: 3158 maximum packets per second.
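
To summarize runs like the log above, a small awk helper can compute
min/avg/max over the per-second samples (a sketch; the `parse_pps` name
is mine, and the log format is assumed to match the lines shown above):

```shell
# Summarize "N pps, M max pps, D total drops" sample lines from
# odp_l2fwd output: minimum, average, and maximum pps plus sample count.
parse_pps() {
  awk '/ pps, .* max pps/ {
         sum += $1; n++
         if ($1 > max) max = $1
         if (min == "" || $1 < min) min = $1
       }
       END { if (n) printf "min=%d avg=%d max=%d samples=%d\n", min, sum / n, max, n }'
}

# Example usage:
#   ./test/performance/odp_l2fwd -i eth0,eth1 -m 0 -t 30 | parse_pps
```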
>>>> _______________________________________________
>>>> lng-odp mailing list
>>>> [email protected]
>>>> https://lists.linaro.org/mailman/listinfo/lng-odp
>>>
>>>
>>>
>>>
>>> --
>>> Mike Holmes
>>> Technical Manager - Linaro Networking Group
>>> Linaro.org │ Open source software for ARM SoCs
>>>
>>>
>>>
