I find that when I use NAT, ARP resolution is abnormal.

For example, I use 3.3.3.5 as the NAT pool address: nat44 add address 3.3.3.5

On the target NIC (3.3.3.1), tcpdump captures the ARP request, but there is never a reply:

23:21:57.783994 ARP, Request who-has 3.3.3.5 tell 3.3.3.1, length 28
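A possible workaround on the target side, while the missing ARP reply is being investigated (a sketch only; the interface name eth0 and the MAC address are placeholders, not values from my setup), is to pin the pool address to VPP's MAC, or to route the pool address via VPP's on-link address 3.3.3.2 so the target never ARPs for 3.3.3.5 directly:

```shell
# Hypothetical workaround on the target host (3.3.3.1).
# eth0 and 00:11:22:33:44:55 are placeholders; substitute the real
# interface facing VPP and the MAC of VPP's 3.3.3.2 interface.

# Option 1: static neighbor entry for the NAT pool address:
ip neigh add 3.3.3.5 lladdr 00:11:22:33:44:55 dev eth0

# Option 2: host route via 3.3.3.2, so the target ARPs for
# 3.3.3.2 (which VPP does answer) instead of 3.3.3.5:
ip route add 3.3.3.5/32 via 3.3.3.2
```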





From: 李洪亮 <lihongli...@360.cn>
Date: Thursday, December 21, 2017, 9:33 PM
To: "Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco)" 
<matfa...@cisco.com>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: Re: The performance problem of NAT plugin

I can see the packets, and if I use one client, it works well.


Sent from my iPhone

On December 21, 2017, at 9:23 PM, Matus Fabian -X (matfabia - PANTHEON 
TECHNOLOGIES at Cisco) <matfa...@cisco.com> wrote:
Hi,

Some packets are processed in both NAT directions (in2out and out2in), so 
some packets do pass through the NAT plugin. Do you see any packets on the 
interfaces?

Matus


From: 李洪亮 [mailto:lihongli...@360.cn]
Sent: Thursday, December 21, 2017 2:17 PM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 
<matfa...@cisco.com>
Cc: vpp-dev@lists.fd.io
Subject: Re: The performance problem of NAT plugin

The VPP version is 17.10, and performance with deterministic NAT is good.


The “show node counters” result:

[root@h101 hugepages]# vppctl show node counters
   Count                    Node                  Reason
       134              nat44-out2in              Good out2in packets processed
        88          nat44-in2out-slowpath         Good in2out packets processed
       289              nat44-in2out              Good in2out packets processed
       108                ip4-glean               address overflow drops
      1071                ip4-glean               ARP requests sent
        31                arp-input               ARP replies sent
      4179                arp-input               ARP request IP4 source address learned
        86              nat44-out2in              Good out2in packets processed
        43          nat44-in2out-slowpath         Good in2out packets processed
       172              nat44-in2out              Good in2out packets processed
       199                ip4-glean               address overflow drops
      1110                ip4-glean               ARP requests sent
       104              nat44-out2in              Good out2in packets processed
        76          nat44-in2out-slowpath         Good in2out packets processed
       226              nat44-in2out              Good in2out packets processed
       186                ip4-glean               address overflow drops
      1126                ip4-glean               ARP requests sent
        33          nat44-in2out-slowpath         Good in2out packets processed
        33              nat44-in2out              Good in2out packets processed
       204                ip4-glean               address overflow drops
      1007                ip4-glean               ARP requests sent


[root@h101 hugepages]# vppctl show nat44 detail
NAT plugin mode: dynamic translations enabled
TenGigabitEthernet81/0/1 out
TenGigabitEthernet81/0/0 in
218.30.116.2
  tenant VRF independent
  0 busy udp ports
  239 busy tcp ports
  0 busy icmp ports
4 workers
  vpp_wk_0
  vpp_wk_1
  vpp_wk_2
  vpp_wk_3
209 users, 1 outside addresses, 240 active sessions, 0 static mappings
Hash table in2out-ed
    0 active elements
    0 free lists
    0 linear search buckets
    0 cache hits, 0 cache misses
Hash table out2in-ed
    0 active elements
    0 free lists
    0 linear search buckets
    0 cache hits, 0 cache misses
Thread 1 (vpp_wk_0 at lcore 7):
  Hash table in2out
    88 active elements
    1 free lists
    0 linear search buckets
    0 cache hits, 0 cache misses
  Hash table out2in
    88 active elements
    1 free lists
    0 linear search buckets
    0 cache hits, 0 cache misses
  158 list pool elements
  10.16.82.84: 1 dynamic translations, 0 static translations
  10.16.80.146: 2 dynamic translations, 0 static translations
  10.16.83.67: 2 dynamic translations, 0 static translations
  10.16.81.113: 2 dynamic translations, 0 static translations
  10.16.83.107: 1 dynamic translations, 0 static translations
  10.16.82.188: 1 dynamic translations, 0 static translations
  10.16.80.130: 1 dynamic translations, 0 static translations
  10.16.83.27: 1 dynamic translations, 0 static translations
  10.16.81.61: 1 dynamic translations, 0 static translations
  10.16.81.205: 1 dynamic translations, 0 static translations
  10.16.83.191: 2 dynamic translations, 0 static translations





From: "Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco)" 
<matfa...@cisco.com>
Date: Thursday, December 21, 2017, 1:40 PM
To: 李洪亮 <lihongli...@360.cn>
Cc: "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
Subject: RE: The performance problem of NAT plugin

Hi,

What is your VPP version? From the output you provided, I can see some NAT 
sessions. Could you please provide the “show node counters” output and the 
interface counters? I tested the NAT plugin with 10K sessions and the packet 
rate was over 10 Mpps (2 interfaces and 2 worker threads).

Regards,
Matus


From: vpp-dev-boun...@lists.fd.io 
[mailto:vpp-dev-boun...@lists.fd.io] On Behalf Of 李洪亮
Sent: Wednesday, December 20, 2017 4:33 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] The performance problem of NAT plugin

Hi All:

I want to use the VPP NAT plugin as a typical SNAT.


(1.1.1.0/24)pkt_gen(2.2.2.1)|----| (2.2.2.2)VPP(3.3.3.2)|----|(3.3.3.1)target|


The VPP configuration is below:

vppctl set interface state TenGigabitEthernet81/0/0 up
vppctl set interface state TenGigabitEthernet81/0/1 up
vppctl set interface ip addr TenGigabitEthernet81/0/0  2.2.2.2/24
vppctl set interface ip addr TenGigabitEthernet81/0/1 3.3.3.2/24
vppctl set interface nat44 in TenGigabitEthernet81/0/0 out TenGigabitEthernet81/0/1
vppctl ip route add 0.0.0.0/0 via 3.3.3.1
vppctl ip route add 1.1.1.0/24 via 2.2.2.1
vppctl nat44 add address  3.3.3.5
vppctl nat44 add address  3.3.3.4
vppctl nat44 add address  3.3.3.3
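To sanity-check a setup like this one, a few vppctl show commands could be run (a diagnostic sketch; these commands exist in 17.10-era VPP, but the exact output will vary per deployment):

```shell
# Diagnostic sketch: inspect neighbor resolution and NAT state.
vppctl show ip arp          # VPP's ARP/neighbor table: are next hops resolved?
vppctl show interface       # interface state and rx/tx counters
vppctl show nat44 detail    # pool addresses, users, sessions, workers
vppctl show node counters   # per-node processed/drop counters
```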


It works well when I use one test server with IP 1.1.1.10 to ping 3.3.3.1:
[@node2 ~]$ ping 3.3.3.1
PING 3.3.3.1 (3.3.3.1) 56(84) bytes of data.
64 bytes from 3.3.3.1: icmp_seq=1 ttl=252 time=1.22 ms
64 bytes from 3.3.3.1: icmp_seq=2 ttl=252 time=0.693 ms
64 bytes from 3.3.3.1: icmp_seq=3 ttl=252 time=0.949 ms
64 bytes from 3.3.3.1: icmp_seq=4 ttl=252 time=1.46 ms
64 bytes from 3.3.3.1: icmp_seq=5 ttl=252 time=1.21 ms
64 bytes from 3.3.3.1: icmp_seq=6 ttl=252 time=0.578 ms
64 bytes from 3.3.3.1: icmp_seq=7 ttl=252 time=0.701 ms

But if I use pkt_gen to generate packets at a low rate (100 pps), NAT does 
not work, and I cannot even ping the IP on the VPP interface.
The “show nat44 detail” result:
vpp# show nat44 detail
NAT plugin mode: dynamic translations enabled
TenGigabitEthernet81/0/0 in
TenGigabitEthernet81/0/1 out
3.3.3.5
  tenant VRF independent
  0 busy udp ports
  315 busy tcp ports
  1 busy icmp ports
3.3.3.6
  tenant VRF independent
  0 busy udp ports
  0 busy tcp ports
  0 busy icmp ports
3.3.3.7
  tenant VRF independent
  0 busy udp ports
  0 busy tcp ports
  0 busy icmp ports
3.3.3.8
  tenant VRF independent
  0 busy udp ports
  0 busy tcp ports
  0 busy icmp ports
3.3.3.9
  tenant VRF independent
  0 busy udp ports
  0 busy tcp ports
  0 busy icmp ports
4 workers
  vpp_wk_0
  vpp_wk_1
  vpp_wk_2
  vpp_wk_3
245 users, 5 outside addresses, 328 active sessions, 0 static mappings
Hash table in2out-ed
    0 active elements
    0 free lists
    0 linear search buckets
    0 cache hits, 0 cache misses
Hash table out2in-ed
    0 active elements
    0 free lists
    0 linear search buckets
    0 cache hits, 0 cache misses
Thread 1 (vpp_wk_0 at lcore 7):
  Hash table in2out
    64 active elements
    1 free lists
    0 linear search buckets
    0 cache hits, 0 cache misses
  Hash table out2in
    64 active elements
    1 free lists
    0 linear search buckets
    0 cache hits, 0 cache misses
  125 list pool elements
  1.1.1.33: 2 dynamic translations, 0 static translations
  1.1.1.29: 2 dynamic translations, 0 static translations
  1.1.1.37: 2 dynamic translations, 0 static translations
  1.1.1.57: 1 dynamic translations, 0 static translations
  1.1.1.61: 1 dynamic translations, 0 static translations
  1.1.1.65: 1 dynamic translations, 0 static translations
  1.1.1.73: 1 dynamic translations, 0 static translations
  1.1.1.77: 1 dynamic translations, 0 static translations
  1.1.1.85: 1 dynamic translations, 0 static translations
  1.1.1.41: 1 dynamic translations, 0 static translations
  1.1.1.45: 1 dynamic translations, 0 static translations
  1.1.1.49: 1 dynamic translations, 0 static translations
  1.1.1.53: 1 dynamic translations, 0 static translations
  1.1.1.69: 1 dynamic translations, 0 static translations
  1.1.1.89: 1 dynamic translations, 0 static translations
  1.1.1.81: 1 dynamic translations, 0 static translations
  1.1.1.145: 1 dynamic translations, 0 static translations
  1.1.1.161: 1 dynamic translations, 0 static translations
  1.1.1.169: 1 dynamic translations, 0 static translations
  1.1.1.157: 1 dynamic translations, 0 static translations
  1.1.1.165: 1 dynamic translations, 0 static translations
  ……


If I use deterministic NAT, it seems to work well.

I want to know what’s wrong with dynamic NAT.
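One way to narrow down where the packets are dropped might be a packet trace (a sketch; the node name dpdk-input assumes DPDK input nodes are in use, which may differ on other drivers):

```shell
# Capture and inspect the per-node path of the next input packets.
vppctl clear trace
vppctl trace add dpdk-input 50   # trace the next 50 packets from dpdk-input
# ... generate some traffic with pkt_gen here ...
vppctl show trace                # shows each node every traced packet visited
```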



_______________________________________________
vpp-dev mailing list
vpp-dev@lists.fd.io
https://lists.fd.io/mailman/listinfo/vpp-dev