Hi Ray, 

The only thing missing with memif is TSO, but that shouldn’t be a reason for
such a drop. I noticed you’re running a debug image; could you try with a
release build as well?
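
For reference, this is roughly what I have in mind, assuming a standard VPP
source tree (targets may differ slightly depending on your build setup):

  $ make build-release   # optimized image, asserts/debug checks compiled out
  $ make run-release     # run the release binary instead of the debug one

The debug image keeps asserts and per-node checks enabled, so it isn’t
representative for throughput numbers.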

Cheers,
Florin

> On Feb 14, 2018, at 7:42 AM, Ray Kinsella <m...@ashroe.eu> wrote:
> 
> 
> Hi Florin,
> 
> So I connected the two containers directly as memif master/slave, taking the 
> VPP vSwitch out completely. Performance doubled, but it is still pretty awful.
> 
> Could this be because I am not using DPDK under the hood in either Container?
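> 
> For context, the direct wiring is roughly as follows on each side - the exact 
> memif CLI syntax varies between VPP releases, so treat this as a sketch of the 
> setup rather than the literal commands (one container as master, the other as 
> slave, with the 192.168.1.1/192.168.1.2 addresses used in the test below):
> 
> DBGvpp# create interface memif id 0 master
> DBGvpp# set interface state memif0/0 up
> DBGvpp# set interface ip address memif0/0 192.168.1.1/24
> 
> and the same on the peer with "slave" and 192.168.1.2/24.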
> 
> Ray K
> 
> DBGvpp# test tcp clients nclients 1 mbytes 16 test-timeout 100 uri tcp://192.168.1.1/9000
> 1 three-way handshakes in .02 seconds 40.67/s
> Test started at 308.999241
> Test finished at 318.521999
> 16777216 bytes (16 mbytes, 0 gbytes) in 9.52 seconds
> 1761802.18 bytes/second full-duplex
> .0141 gbit/second full-duplex
> 
> --------------- cone ---------------
> DBGvpp# show error
>   Count                    Node                  Reason
>     23498              session-queue             Packets transmitted
>         4            tcp4-rcv-process            Packets pushed into rx fifo
>     23498            tcp4-established            Packets pushed into rx fifo
>         4             ip4-icmp-input             echo replies sent
>         1                arp-input               ARP replies sent
> DBGvpp# show ha
>              Name                Idx   Link  Hardware
> local0                             0    down  local0
>  local
> memif0/0                           1     up   memif0/0
>  Ethernet address 02:fe:70:35:68:de
>  MEMIF interface
>     instance 0
> 
> --------------- ctwo ---------------
> DBGvpp# show error
>   Count                    Node                  Reason
>     23522              session-queue             Packets transmitted
>         2            tcp4-rcv-process            Packets pushed into rx fifo
>     23522            tcp4-established            Packets pushed into rx fifo
>         1                ip4-glean               ARP requests sent
>         4             ip4-icmp-input             unknown type
>         1                arp-input               ARP request IP4 source address learned
> DBGvpp# show ha
>              Name                Idx   Link  Hardware
> local0                             0    down  local0
>  local
> memif0/0                           1     up   memif0/0
>  Ethernet address 02:fe:a3:b6:94:cd
>  MEMIF interface
>     instance 0
> 
> 
> On 13/02/2018 16:37, Florin Coras wrote:
>> It would really help if I read the whole email!
>> Apparently the test finishes, albeit with miserable performance! So, for 
>> some reason lots and lots of packets are lost and that’s what triggers the 
>> “heuristic” in the test client that complains the connection is stuck. What 
>> does “show error” say? Does memif output something for “show ha”?
>> Florin
>>> On Feb 13, 2018, at 8:20 AM, Ray Kinsella <m...@ashroe.eu> wrote:
>>> 
>>> Still stuck ...
>>> 
>>> DBGvpp# test tcp clients nclients 1 mbytes 16 test-timeout 100 uri tcp://192.168.1.1/9000
>>> 1 three-way handshakes in .05 seconds 21.00/s
>>> Test started at 205.983776
>>> 0: builtin_client_node_fn:216: stuck clients
>>> Test finished at 229.687355
>>> 16777216 bytes (16 mbytes, 0 gbytes) in 23.70 seconds
>>> 707792.53 bytes/second full-duplex
>>> .0057 gbit/second full-duplex
>>> 
>>> As a complete aside - pings appear to be fairly slow through the VPP 
>>> vSwitch:
>>> 
>>> DBGvpp# ping 192.168.1.2
>>> 64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=64.8519 ms
>>> 64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=69.1016 ms
>>> 64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=64.1253 ms
>>> 64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=62.5618 ms
>>> 
>>> Ray K
>>> 
>>> 
>>> On 13/02/2018 16:10, Florin Coras wrote:
>>>> test-timeout 100
> 
> 
