Hi Florin,

So I connected the two containers directly as memif master/slave, taking the VPP vSwitch completely out. Performance is doubled, but it is still pretty awful.

Could this be because I am not using DPDK under the hood in either container?
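
For completeness, the direct memif link between the two containers was brought up with CLI roughly like the following (a sketch - exact syntax depends on the VPP version, which side is master vs. slave is arbitrary, and the addresses are taken from the test URI below; both containers share the default memif socket path):

--------------- cone ---------------
DBGvpp# create interface memif id 0 master
DBGvpp# set interface state memif0/0 up
DBGvpp# set interface ip address memif0/0 192.168.1.1/24

--------------- ctwo ---------------
DBGvpp# create interface memif id 0 slave
DBGvpp# set interface state memif0/0 up
DBGvpp# set interface ip address memif0/0 192.168.1.2/24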

Ray K

DBGvpp# test tcp clients nclients 1 mbytes 16 test-timeout 100 uri tcp://192.168.1.1/9000
1 three-way handshakes in .02 seconds 40.67/s
Test started at 308.999241
Test finished at 318.521999
16777216 bytes (16 mbytes, 0 gbytes) in 9.52 seconds
1761802.18 bytes/second full-duplex
.0141 gbit/second full-duplex

--------------- cone ---------------
DBGvpp# show error
   Count                    Node                  Reason
     23498              session-queue             Packets transmitted
         4            tcp4-rcv-process         Packets pushed into rx fifo
     23498            tcp4-established         Packets pushed into rx fifo
         4             ip4-icmp-input             echo replies sent
         1                arp-input               ARP replies sent
DBGvpp# show ha
              Name                Idx   Link  Hardware
local0                             0    down  local0
  local
memif0/0                           1     up   memif0/0
  Ethernet address 02:fe:70:35:68:de
  MEMIF interface
     instance 0

--------------- ctwo ---------------
DBGvpp# show error
   Count                    Node                  Reason
     23522              session-queue             Packets transmitted
         2            tcp4-rcv-process         Packets pushed into rx fifo
     23522            tcp4-established         Packets pushed into rx fifo
         1                ip4-glean               ARP requests sent
         4             ip4-icmp-input             unknown type
         1                arp-input               ARP request IP4 source address learned
DBGvpp# show ha
              Name                Idx   Link  Hardware
local0                             0    down  local0
  local
memif0/0                           1     up   memif0/0
  Ethernet address 02:fe:a3:b6:94:cd
  MEMIF interface
     instance 0


On 13/02/2018 16:37, Florin Coras wrote:
It would really help if I read the whole email!

Apparently the test finishes, albeit with miserable performance! So, for some 
reason lots and lots of packets are lost and that’s what triggers the 
“heuristic” in the test client that complains the connection is stuck. What 
does “show error” say? Does memif output something for “show ha”?

Florin

On Feb 13, 2018, at 8:20 AM, Ray Kinsella <m...@ashroe.eu> wrote:

Still stuck ...

DBGvpp# test tcp clients nclients 1 mbytes 16 test-timeout 100 uri 
tcp://192.168.1.1/9000
1 three-way handshakes in .05 seconds 21.00/s
Test started at 205.983776
0: builtin_client_node_fn:216: stuck clients
Test finished at 229.687355
16777216 bytes (16 mbytes, 0 gbytes) in 23.70 seconds
707792.53 bytes/second full-duplex
.0057 gbit/second full-duplex

As a complete aside - pings appear to be quite slow through the VPP vSwitch as well:

DBGvpp# ping 192.168.1.2
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=64.8519 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=69.1016 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=64.1253 ms
64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=62.5618 ms

Ray K


On 13/02/2018 16:10, Florin Coras wrote:
test-timeout 100





