Hi Damjan,

I'm trying to build and install vpp in LXC. I got the vpp source code, built
it inside an LXC container and created the ".deb" packages. Now I've run into
a problem: after installing vpp, it cannot bind any interfaces. In other
words, vpp shows only local0 when I run the "vppctl show interface" command;
it does not show the other interfaces.

In order to test memif throughput, I have gone through the following steps:
1. I installed LXC.
2. I created 2 LXC containers.
3. I tried to build and install vpp inside the LXC containers.
4. ... and this is where I got stuck!
In fact, I intended to set up the following scenario, but no physical
interface is bound to vpp. I would appreciate any hint or document (ideally
one covering vpp configuration in LXC containers for a memif throughput test).

Trex --------> intfc----vpp-lxc-container1----memif <--------->
memif----vpp-lxc-container2----intfc -----------> Trex
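
For the memif leg I assume the unix socket file has to be reachable from both
containers; I was planning to share the socket directory with an LXC bind
mount roughly like the one below (the paths are placeholders I made up):

lxc.mount.entry = /run/vpp-memif run/vpp-memif none bind,create=dir 0 0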


Best Regards
Chore



On Tue, Feb 14, 2017 at 4:51 PM, Damjan Marion (damarion) <
[email protected]> wrote:

>
> I got the first pings running over the new shared memory interface driver.
> The code [1] is still very fragile, but basic packet forwarding works ...
>
> This interface defines master/slave relationship.
>
> Some characteristics:
>  - slave can run inside un-privileged containers
>  - master can run inside container, but it requires global PID namespace
> and PTRACE capability
>  - initial connection is done over a unix socket, so for container
> networking the socket file needs to be mapped into the container
>  - slave allocates shared memory for descriptor rings and passes FD to
> master
>  - slave is ring producer for both tx and rx, it fills rings with either
> full or empty buffers
>  - master is ring consumer, it reads descriptors and executes memcpy
> from/to buffer
>  - process_vm_readv / process_vm_writev linux system calls are used to copy
> data directly between master and slave VM (this avoids a 2nd memcpy; see the
> rough sketch after this list)
>  - process_vm_* system calls are executed once per vector of packets
>  - from security perspective, slave doesn’t have access to master memory
>  - currently polling-only
>  - reconnection should just work - the slave runs a reconnect process in
> case the master disappears
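>
> A rough, self-contained sketch of that per-vector copy on the master side
> (the descriptor type here is made up for illustration; the real layout is
> in the patch):
>
> #define _GNU_SOURCE
> #include <sys/uio.h>
> #include <sys/types.h>
>
> typedef struct { void *slave_addr; size_t len; } demo_desc_t;
>
> /* pull n_pkts buffers out of the slave process with a single syscall */
> static ssize_t
> copy_vector_from_slave (pid_t slave_pid, demo_desc_t * descs,
>                         void **local_bufs, int n_pkts)
> {
>   struct iovec local[n_pkts], remote[n_pkts];
>   for (int i = 0; i < n_pkts; i++)
>     {
>       local[i].iov_base = local_bufs[i];
>       local[i].iov_len = descs[i].len;
>       remote[i].iov_base = descs[i].slave_addr;
>       remote[i].iov_len = descs[i].len;
>     }
>   /* one process_vm_readv per vector of packets, as noted above */
>   return process_vm_readv (slave_pid, local, n_pkts, remote, n_pkts, 0);
> }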
>
> TODO:
>  - multi-queue
>  - interrupt mode (likely a simple byte read/write to a file descriptor;
> see the sketch after this list)
>  - lightweight library to be used for non-VPP clients
>  - L3 mode ???
>  - perf tuning
>  - user-mode memcpy - master maps slave buffer memory directly…
>  - docs / specification
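>
> Pure speculation matching the "likely" above: the wakeup could be as simple
> as the producer writing one byte to a shared file descriptor and the
> consumer blocking in read() until work arrives, e.g.:
>
> #include <unistd.h>
>
> /* producer side: poke the peer after enqueueing descriptors */
> static void
> notify_peer (int wake_fd)
> {
>   char one = 1;
>   (void) write (wake_fd, &one, 1);
> }
>
> /* consumer side: sleep here instead of spinning in polling mode */
> static void
> wait_for_work (int wake_fd)
> {
>   char drain[64];
>   (void) read (wake_fd, drain, sizeof (drain));
> }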
>
> At this point I would really like to hear feedback from people, especially
> on the usability side.
>
> config is basically:
>
> create memif socket /path/to/unix_socket.file [master|slave]
> set int state memif0 up
>
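> A rough end-to-end version of the two-box test shown below, with addresses
> matching the ping output (treat the exact syntax as provisional):
>
> create memif socket /path/to/unix_socket.file master
> set int state memif0 up
> set int ip address memif0 172.16.0.2/24
>
> and on the peer:
>
> create memif socket /path/to/unix_socket.file slave
> set int state memif0 up
> set int ip address memif0 172.16.0.1/24
>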
> DBGvpp# show interfaces
>               Name               Idx       State          Counter          Count
> local0                            0        down
> memif0                            1         up
> DBGvpp# show interfaces address
> local0 (dn):
> memif0 (up):
>   172.16.0.2/24
> DBGvpp# ping 172.16.0.1
> 64 bytes from 172.16.0.1: icmp_seq=1 ttl=64 time=18.4961 ms
> 64 bytes from 172.16.0.1: icmp_seq=2 ttl=64 time=18.4282 ms
> 64 bytes from 172.16.0.1: icmp_seq=3 ttl=64 time=26.4333 ms
> 64 bytes from 172.16.0.1: icmp_seq=4 ttl=64 time=18.4255 ms
> 64 bytes from 172.16.0.1: icmp_seq=5 ttl=64 time=14.4133 ms
>
> Statistics: 5 sent, 5 received, 0% packet loss
> DBGvpp# show interfaces
>               Name               Idx       State          Counter          Count
> local0                            0        down
> memif0                            1         up       rx packets                     5
>                                                      rx bytes                     490
>                                                      tx packets                     5
>                                                      tx bytes                     490
>                                                      drops                          5
>                                                      ip4                            5
>
>
>
>
> [1] https://gerrit.fd.io/r/#/c/5004/
>
>
_______________________________________________
vpp-dev mailing list
[email protected]
https://lists.fd.io/mailman/listinfo/vpp-dev
