On Wed, Jan 29, 2014 at 12:56 AM, Prashant Upadhyaya
<prashant.upadhyaya at aricent.com> wrote:
> Hi Pravin,
>
> I think your stuff is on the brink of creating a mini revolution :)
>
> Some questions inline below --
> +    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
> What do you mean by port id here? Do you mean the physical interface,
> like eth0, which I have now bound to igb_uio?
> If I have bound multiple interfaces to igb_uio, e.g. eth0, eth1,
> eth2, etc., what is the id mapping for those?
>
The port id is an id assigned by DPDK; the dpdk<portid> interface name
takes that port id as an argument. Currently you need to look at the
PCI id to figure out which device maps to which port id. I know this is
not clean, and I am exploring a better interface so that we can specify
device names to ovs-vsctl.
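
For reference, here is a minimal sketch of how that mapping can be
inspected from a DPDK application, using rte_eth_dev_count() and
rte_eth_dev_info_get(). The pci_dev field layout varies between DPDK
releases, so treat the exact field names as assumptions:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

/* Print each DPDK port id with the PCI address of the device behind it. */
int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return 1;

    uint8_t n = rte_eth_dev_count();
    for (uint8_t port = 0; port < n; port++) {
        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port, &info);
        if (info.pci_dev) {
            struct rte_pci_addr *a = &info.pci_dev->addr;

            printf("port %u -> %04x:%02x:%02x.%x\n", port,
                   a->domain, a->bus, a->devid, a->function);
        }
    }
    return 0;
}

Comparing that output against lspci tells you which dpdk<portid> name
corresponds to which NIC.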

> If I have VMs running, how do I typically interface those VMs to this
> OVS in user space now? Do I use the same classical 'tap' interface and
> add it to the OVS above?

A tap device will work, but you would not get good performance,
primarily due to scheduling delay and memcpy.
DPDK has multiple drivers for creating interfaces to a KVM guest OS;
those should perform better. I have not tried them yet.
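
To make the tap overhead concrete, here is a minimal user-space sketch
of the classical tap path (standard Linux tuntap API; "tap0" is just a
hypothetical device name). Every frame costs at least one blocking
read() and one copy into user space, which is exactly the scheduling
delay and memcpy mentioned above:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

int main(void)
{
    struct ifreq ifr;
    char buf[2048];
    int fd = open("/dev/net/tun", O_RDWR);

    if (fd < 0) { perror("open"); return 1; }

    memset(&ifr, 0, sizeof ifr);
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
    strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);
    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); return 1; }

    for (;;) {
        /* Blocks until the guest sends a frame; the kernel copies it here. */
        ssize_t n = read(fd, buf, sizeof buf);
        if (n < 0) break;
        printf("frame of %zd bytes\n", n);
    }
    return 0;
}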

> What is the actual path the data takes from the VM all the way to the
> switch now? Wouldn't it be hypervisor, to kernel, to the OVS switch in
> user space, to the other VM/network?

It depends on the method you use, e.g. MEMNIC bypasses the hypervisor
and the host kernel entirely by sharing memory directly with the guest.

> I think if we can solve the VM-to-OVS-port connectivity while
> remaining in user space only, then we have a great thing on our hands.
> Kindly comment on this.
>
Right, performance looks pretty good. Still, DPDK needs constant
polling, which consumes more power. The RFC ovs-dpdk patch has a simple
polling loop which needs tweaking for better power usage.
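
The patch itself has the authoritative loop; purely as an illustration
of the kind of tweak meant, here is a sketch that backs off with short
sleeps when a port is idle instead of spinning at 100% CPU (it assumes
the port and rx queue are already configured elsewhere):

#include <unistd.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32

static void poll_loop(uint8_t port)
{
    struct rte_mbuf *pkts[BURST];
    unsigned idle = 0;

    for (;;) {
        uint16_t n = rte_eth_rx_burst(port, 0, pkts, BURST);

        if (n == 0) {
            /* Idle: sleep briefly, escalating up to ~1 ms, trading a
             * little latency for much lower power draw. */
            if (++idle > 100)
                usleep(idle < 1000 ? idle : 1000);
            continue;
        }
        idle = 0;
        for (uint16_t i = 0; i < n; i++) {
            /* ... process the packet ... */
            rte_pktmbuf_free(pkts[i]);
        }
    }
}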

Thanks,
Pravin.



> Regards
> -Prashant
>
>
