Re: Kernel Performance Tuning for High Volume SCTP traffic

2017-10-16 Thread Neil Horman
On Sat, Oct 14, 2017 at 10:29:53PM +0800, Traiano Welcome wrote:
> I've upped the value of the following sctp and udp related parameters,
> in the hope that this would help:
> 
> sysctl -w net.core.rmem_max=9
> sysctl -w net.core.wmem_max=9
> 
> sysctl -w net.sctp.sctp_mem="21 21 21"
> sysctl -w net.sctp.sctp_rmem="21 21 21"
> sysctl -w net.sctp.sctp_wmem="21 21 21"
> 
> sysctl -w net.ipv4.udp_mem="50 50 50"
> sysctl -w net.ipv4.udp_mem="100 100 100"
> 
> However, I'm still seeing rapidly incrementing rx discards reported on the 
> NIC:
> 
> :~# ethtool -S ens4f1 | egrep -i rx_discards
>  [0]: rx_discards: 6390805462
>  [1]: rx_discards: 6659315919
>  [2]: rx_discards: 6542570026
>  [3]: rx_discards: 6431513008
>  [4]: rx_discards: 6436779078
>  [5]: rx_discards: 6665897051
>  [6]: rx_discards: 6167985560
>  [7]: rx_discards: 11340068788
>  rx_discards: 56634934892
> 
If you're getting drops in the hardware and nothing in the higher layers is
overflowing, then your problem is likely one of two things:

1) The NIC is discarding frames for reasons orthogonal to provisioning.  That is
to say, a large number of incoming frames are being purposely discarded.  Check
the other NIC stats in ethtool to get some additional visibility into what these
frames might be.

2) You're not servicing the NIC fast enough to pull frames out before its
internal buffer overflows.  Check the interrupt mitigation and flow
director/ntuple settings to make sure that packets are being spread properly
across per-CPU queues, that interrupt mitigation is preventing undue CPU load,
and that irqbalance is distributing the interrupt load across all CPUs in the
system (see the example commands below).
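
As a rough sketch of those checks (assuming the ens4f1 interface from the stats
above; counter names and supported options vary by driver):

# any other non-zero error/discard counters besides rx_discards?
ethtool -S ens4f1 | egrep -v ': 0$'

# current interrupt coalescing/mitigation settings
ethtool -c ens4f1

# is ntuple/flow steering enabled, and are any rules programmed?
ethtool -k ens4f1 | grep ntuple
ethtool -n ens4f1

# how many RX queues are in use, and how RSS spreads flows across them
ethtool -l ens4f1
ethtool -x ens4f1

# are the per-queue interrupts landing on different CPUs?
grep ens4f1 /proc/interrupts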

Neil

> Despite the fact that I've set the NIC ring buffer on the NetXtreme
> interface to the maximum:
> 
> :~# ethtool -g ens4f0
> Ring parameters for ens4f0:
> Pre-set maximums:
> RX: 4078
> RX Mini:0
> RX Jumbo:   0
> TX: 4078
> Current hardware settings:
> RX: 4078
> RX Mini:0
> RX Jumbo:   0
> TX: 4078
> 
> I see no ip errors at the physical interface:
> 
> ethtool -S ens4f0 | egrep phy_ip_err_discard| tail -1
>  rx_phy_ip_err_discards: 0
> 
> 
> Could anyone suggest alternative approaches I might take to optimising
> the system's handling of SCTP traffic?
> 
> 
> 
> On Sat, Oct 14, 2017 at 12:35 AM, David Laight  
> wrote:
> > From: Traiano Welcome
> >> Sent: 13 October 2017 17:04
> >> On Fri, Oct 13, 2017 at 11:56 PM, David Laight  
> >> wrote:
> >> > From: Traiano Welcome
> >> >
> >> > (copied to netdev)
> >> >> Sent: 13 October 2017 07:16
> >> >> To: linux-s...@vger.kernel.org
> >> >> Subject: Kernel Performance Tuning for High Volume SCTP traffic
> >> >>
> >> >> Hi List
> >> >>
> >> >> I'm running a linux server processing high volumes of SCTP traffic and
> >> >> am seeing large numbers of packet overruns (ifconfig output).
> >> >
> >> > I'd guess that overruns indicate that the ethernet MAC is failing to
> >> > copy the receive frames into kernel memory.
> >> > It is probably running out of receive buffers, but might be
> >> > suffering from a lack of bus bandwidth.
> >> > MAC drivers usually discard receive frames if they can't get
> >> > a replacement buffer - so you shouldn't run out of rx buffers.
> >> >
> >> > This means the errors are probably below SCTP - so changing SCTP 
> >> > parameters
> >> > is unlikely to help.
> >>
> >> Does this mean that tuning UDP performance could help ? Or do you mean
> >> hardware (NIC) performance could be the issue?
> >
> > I'd certainly check UDP performance.
> >
> > David
> >
> --
> To unsubscribe from this list: send the line "unsubscribe linux-sctp" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


Re: Kernel Performance Tuning for High Volume SCTP traffic

2017-10-14 Thread Traiano Welcome
I've upped the value of the following sctp and udp related parameters,
in the hope that this would help:

sysctl -w net.core.rmem_max=9
sysctl -w net.core.wmem_max=9

sysctl -w net.sctp.sctp_mem="21 21 21"
sysctl -w net.sctp.sctp_rmem="21 21 21"
sysctl -w net.sctp.sctp_wmem="21 21 21"

sysctl -w net.ipv4.udp_mem="50 50 50"
sysctl -w net.ipv4.udp_mem="100 100 100"
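
(As a sanity check, and only as a sketch assuming the sctp module is loaded, the
applied values can be read back and per-protocol memory pressure inspected with:)

# confirm the values actually took effect
sysctl net.core.rmem_max net.core.wmem_max net.sctp.sctp_mem net.ipv4.udp_mem

# per-protocol socket memory usage vs. limits (the "memory" column)
egrep 'SCTP|UDP' /proc/net/protocols

# SCTP MIB counters, if exposed by the module
cat /proc/net/sctp/snmp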

However, I'm still seeing rapidly incrementing rx discards reported on the NIC:

:~# ethtool -S ens4f1 | egrep -i rx_discards
 [0]: rx_discards: 6390805462
 [1]: rx_discards: 6659315919
 [2]: rx_discards: 6542570026
 [3]: rx_discards: 6431513008
 [4]: rx_discards: 6436779078
 [5]: rx_discards: 6665897051
 [6]: rx_discards: 6167985560
 [7]: rx_discards: 11340068788
 rx_discards: 56634934892
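
(To tell whether the kernel side is also falling behind, rather than just the
NIC, a quick sketch: the second column of /proc/net/softnet_stat is the per-CPU
count of packets dropped because the input backlog was full.)

# software-side drop/time-squeeze counters, one row per CPU
cat /proc/net/softnet_stat

# RX drop/error counters as the kernel sees them for this interface
ip -s link show ens4f1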

Despite the fact that I've set the NIC ring buffer on the NetXtreme
interface to the maximum:

:~# ethtool -g ens4f0
Ring parameters for ens4f0:
Pre-set maximums:
RX: 4078
RX Mini:0
RX Jumbo:   0
TX: 4078
Current hardware settings:
RX: 4078
RX Mini:0
RX Jumbo:   0
TX: 4078

I see no ip errors at the physical interface:

ethtool -S ens4f0 | egrep phy_ip_err_discard| tail -1
 rx_phy_ip_err_discards: 0


Could anyone suggest alternative approaches I might take to optimising
the system's handling of SCTP traffic?



On Sat, Oct 14, 2017 at 12:35 AM, David Laight  wrote:
> From: Traiano Welcome
>> Sent: 13 October 2017 17:04
>> On Fri, Oct 13, 2017 at 11:56 PM, David Laight  
>> wrote:
>> > From: Traiano Welcome
>> >
>> > (copied to netdev)
>> >> Sent: 13 October 2017 07:16
>> >> To: linux-s...@vger.kernel.org
>> >> Subject: Kernel Performance Tuning for High Volume SCTP traffic
>> >>
>> >> Hi List
>> >>
>> >> I'm running a linux server processing high volumes of SCTP traffic and
>> >> am seeing large numbers of packet overruns (ifconfig output).
>> >
>> > I'd guess that overruns indicate that the ethernet MAC is failing to
>> > copy the receive frames into kernel memory.
>> > It is probably running out of receive buffers, but might be
>> > suffering from a lack of bus bandwidth.
>> > MAC drivers usually discard receive frames if they can't get
>> > a replacement buffer - so you shouldn't run out of rx buffers.
>> >
>> > This means the errors are probably below SCTP - so changing SCTP parameters
>> > is unlikely to help.
>>
>> Does this mean that tuning UDP performance could help ? Or do you mean
>> hardware (NIC) performance could be the issue?
>
> I'd certainly check UDP performance.
>
> David
>


RE: Kernel Performance Tuning for High Volume SCTP traffic

2017-10-13 Thread David Laight
From: Traiano Welcome
> Sent: 13 October 2017 17:04
> On Fri, Oct 13, 2017 at 11:56 PM, David Laight  
> wrote:
> > From: Traiano Welcome
> >
> > (copied to netdev)
> >> Sent: 13 October 2017 07:16
> >> To: linux-s...@vger.kernel.org
> >> Subject: Kernel Performance Tuning for High Volume SCTP traffic
> >>
> >> Hi List
> >>
> >> I'm running a linux server processing high volumes of SCTP traffic and
> >> am seeing large numbers of packet overruns (ifconfig output).
> >
> > I'd guess that overruns indicate that the ethernet MAC is failing to
> > copy the receive frames into kernel memory.
> > It is probably running out of receive buffers, but might be
> > suffering from a lack of bus bandwidth.
> > MAC drivers usually discard receive frames if they can't get
> > a replacement buffer - so you shouldn't run out of rx buffers.
> >
> > This means the errors are probably below SCTP - so changing SCTP parameters
> > is unlikely to help.
> 
> Does this mean that tuning UDP performance could help ? Or do you mean
> hardware (NIC) performance could be the issue?

I'd certainly check UDP performance.

David



Re: Kernel Performance Tuning for High Volume SCTP traffic

2017-10-13 Thread Traiano Welcome
Hi David

On Fri, Oct 13, 2017 at 11:56 PM, David Laight  wrote:
> From: Traiano Welcome
>
> (copied to netdev)
>> Sent: 13 October 2017 07:16
>> To: linux-s...@vger.kernel.org
>> Subject: Kernel Performance Tuning for High Volume SCTP traffic
>>
>> Hi List
>>
>> I'm running a linux server processing high volumes of SCTP traffic and
>> am seeing large numbers of packet overruns (ifconfig output).
>
> I'd guess that overruns indicate that the ethernet MAC is failing to
> copy the receive frames into kernel memory.
> It is probably running out of receive buffers, but might be
> suffering from a lack of bus bandwidth.
> MAC drivers usually discard receive frames if they can't get
> a replacement buffer - so you shouldn't run out of rx buffers.
>
> This means the errors are probably below SCTP - so changing SCTP parameters
> is unlikely to help.
>


Does this mean that tuning UDP performance could help? Or do you mean
hardware (NIC) performance could be the issue?



> I'd make sure any receive interrupt coalescing/mitigation is turned off.
>

I'll try that.



> David
>
>
>> I think a large amount of performance tuning can probably be done to
>> improve the linux kernel's SCTP handling performance, but there seem
>> to be no guides on this available. Could anyone advise on this?
>>
>>
>> Here are my current settings, and below, some stats:
>>
>>
>> -
>> net.sctp.addip_enable = 0
>> net.sctp.addip_noauth_enable = 0
>> net.sctp.addr_scope_policy = 1
>> net.sctp.association_max_retrans = 10
>> net.sctp.auth_enable = 0
>> net.sctp.cookie_hmac_alg = sha1
>> net.sctp.cookie_preserve_enable = 1
>> net.sctp.default_auto_asconf = 0
>> net.sctp.hb_interval = 3
>> net.sctp.max_autoclose = 8589934
>> net.sctp.max_burst = 40
>> net.sctp.max_init_retransmits = 8
>> net.sctp.path_max_retrans = 5
>> net.sctp.pf_enable = 1
>> net.sctp.pf_retrans = 0
>> net.sctp.prsctp_enable = 1
>> net.sctp.rcvbuf_policy = 0
>> net.sctp.rto_alpha_exp_divisor = 3
>> net.sctp.rto_beta_exp_divisor = 2
>> net.sctp.rto_initial = 3000
>> net.sctp.rto_max = 6
>> net.sctp.rto_min = 1000
>> net.sctp.rwnd_update_shift = 4
>> net.sctp.sack_timeout = 50
>> net.sctp.sctp_mem = 61733040 82310730 123466080
>> net.sctp.sctp_rmem = 40960 8655000 41943040
>> net.sctp.sctp_wmem = 40960 8655000 41943040
>> net.sctp.sndbuf_policy = 0
>> net.sctp.valid_cookie_life = 6
>> -
>>
>>
>> I'm seeing a high rate of packet errors (almost all overruns) on both
>> 10gb NICs attached to my linux server.
>>
>> The system is handling high volumes of network traffic, so this is
>> likely a linux kernel tuning problem.
>>
>> All the normal tuning parameters I've tried thus far seem to be
>> having little effect and I'm still seeing high volumes of packet
>> overruns.
>>
>> Any pointers on other things I could try to get the system handling
>> SCTP packets efficiently would be much appreciated!
>>
>> -
>> :~# ifconfig ens4f1
>>
>> ens4f1Link encap:Ethernet  HWaddr 5c:b9:01:de:0d:4c
>>   UP BROADCAST RUNNING PROMISC MULTICAST  MTU:9000  Metric:1
>>   RX packets:22313514162 errors:17598241316 dropped:68
>> overruns:17598241316 frame:0
>>   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>   collisions:0 txqueuelen:1000
>>   RX bytes:31767480894219 (31.7 TB)  TX bytes:0 (0.0 B)
>>   Interrupt:17 Memory:c980-c9ff
>> -
>>
>> System details:
>>
>> OS: Ubuntu Linux (4.11.0-14-generic #20~16.04.1-Ubuntu SMP x86_64 )
>> CPU Cores : 72
>> NIC Model : NetXtreme II BCM57810 10 Gigabit Ethernet
>> RAM   : 240 GiB
>>
>> NIC sample stats showing packet error rate:
>>
>> 
>>
>> for i in `seq 1 10`;do echo "$i) `date`" - $(ifconfig ens4f0| egrep
>> "RX"| egrep overruns;sleep 5);done
>>
>> 1) Thu Oct 12 19:50:40 SGT 2017 - RX packets:8364065830
>> errors:2594507718 dropped:215 overruns:2594507718 frame:0
>> 2) Thu Oct 12 19:50:45 SGT 2017 - RX packets:8365336060
>> errors:2596662672 dropped:215 overruns:2596662672 frame:0
>> 3) Thu Oct 12 19:50:50 SGT 2017 - RX packets:8366602087
>> errors:2598840959 dropped:215 overruns:2598840959 frame:0
>> 4) Thu Oct 12 19:50:55 SGT 2017 - RX packets:8367881271
>> errors:2600989229 dropped:215 overruns:2600989229 frame:0
>> 5) Thu Oct 12 19:51:01 SGT 2017 - RX packets:8369147536
>> errors:2603157030 dropped:215 overruns:2603157030 frame:0
>> 6) Thu Oct 12 19:51:06 SGT 2017 - RX packets:8370149567
>> errors:2604904183 dropped:215 overruns:2604904183 frame:0
>> 7) Thu Oct 12 19:51:11 SGT 2017 - RX packets:8371298018
>> errors:2607183939 dropped:215 overruns:2607183939 frame:0
>> 8) Thu Oct 12 19:51:16 SGT 2017 - RX packets:8372455587
>> errors:2609411186 dropped:215 overruns:2609411186 frame:0
>> 9) Thu Oct 12 19:51:21 SGT 2017 - RX packets:8373585102
>> errors:2611680597 dropped:215 overruns:2611680597 frame:0
>> 10) Thu Oct 12 19:51:26 SGT 2017 - RX packets:8374678508
>> errors:2614053000 dropped:215 overruns:2614053000 frame:0

RE: Kernel Performance Tuning for High Volume SCTP traffic

2017-10-13 Thread David Laight
From: Traiano Welcome

(copied to netdev)
> Sent: 13 October 2017 07:16
> To: linux-s...@vger.kernel.org
> Subject: Kernel Performance Tuning for High Volume SCTP traffic
> 
> Hi List
> 
> I'm running a linux server processing high volumes of SCTP traffic and
> am seeing large numbers of packet overruns (ifconfig output).

I'd guess that overruns indicate that the ethernet MAC is failing to
copy the receive frames into kernel memory.
It is probably running out of receive buffers, but might be
suffering from a lack of bus bandwidth.
MAC drivers usually discard receive frames if they can't get
a replacement buffer - so you shouldn't run out of rx buffers.

This means the errors are probably below SCTP - so changing SCTP parameters
is unlikely to help.
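
(A sketch of checking for the receive-buffer case, assuming the ens4f0 interface
named elsewhere in this thread; the usable maximum is driver-dependent:)

# current vs. pre-set maximum RX ring size
ethtool -g ens4f0
# raise the RX ring to the maximum the driver reports (4078 on the BCM57810 here)
ethtool -G ens4f0 rx 4078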

I'd make sure any receive interrupt coalescing/mitigation is turned off.
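
(For example, as a sketch; exact option support varies by driver, so check
ethtool -c first:)

# show the current coalescing settings
ethtool -c ens4f0
# disable adaptive moderation and interrupt once per received frame
ethtool -C ens4f0 adaptive-rx off rx-usecs 0 rx-frames 1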

David
 

> I think a large amount of performance tuning can probably be done to
> improve the linux kernel's SCTP handling performance, but there seem
> to be no guides on this available. Could anyone advise on this?
> 
> 
> Here are my current settings, and below, some stats:
> 
> 
> -
> net.sctp.addip_enable = 0
> net.sctp.addip_noauth_enable = 0
> net.sctp.addr_scope_policy = 1
> net.sctp.association_max_retrans = 10
> net.sctp.auth_enable = 0
> net.sctp.cookie_hmac_alg = sha1
> net.sctp.cookie_preserve_enable = 1
> net.sctp.default_auto_asconf = 0
> net.sctp.hb_interval = 3
> net.sctp.max_autoclose = 8589934
> net.sctp.max_burst = 40
> net.sctp.max_init_retransmits = 8
> net.sctp.path_max_retrans = 5
> net.sctp.pf_enable = 1
> net.sctp.pf_retrans = 0
> net.sctp.prsctp_enable = 1
> net.sctp.rcvbuf_policy = 0
> net.sctp.rto_alpha_exp_divisor = 3
> net.sctp.rto_beta_exp_divisor = 2
> net.sctp.rto_initial = 3000
> net.sctp.rto_max = 6
> net.sctp.rto_min = 1000
> net.sctp.rwnd_update_shift = 4
> net.sctp.sack_timeout = 50
> net.sctp.sctp_mem = 61733040 82310730 123466080
> net.sctp.sctp_rmem = 40960 8655000 41943040
> net.sctp.sctp_wmem = 40960 8655000 41943040
> net.sctp.sndbuf_policy = 0
> net.sctp.valid_cookie_life = 6
> -
> 
> 
> I'm seeing a high rate of packet errors (almost all overruns) on both
> 10gb NICs attached to my linux server.
> 
> The system is handling high volumes of network traffic, so this is
> likely a linux kernel tuning problem.
> 
> All the normal tuning parameters I've tried thus far seem to be
> having little effect and I'm still seeing high volumes of packet
> overruns.
> 
> Any pointers on other things I could try to get the system handling
> SCTP packets efficiently would be much appreciated!
> 
> -
> :~# ifconfig ens4f1
> 
> ens4f1Link encap:Ethernet  HWaddr 5c:b9:01:de:0d:4c
>   UP BROADCAST RUNNING PROMISC MULTICAST  MTU:9000  Metric:1
>   RX packets:22313514162 errors:17598241316 dropped:68
> overruns:17598241316 frame:0
>   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:31767480894219 (31.7 TB)  TX bytes:0 (0.0 B)
>   Interrupt:17 Memory:c980-c9ff
> -
> 
> System details:
> 
> OS: Ubuntu Linux (4.11.0-14-generic #20~16.04.1-Ubuntu SMP x86_64 )
> CPU Cores : 72
> NIC Model : NetXtreme II BCM57810 10 Gigabit Ethernet
> RAM   : 240 GiB
> 
> NIC sample stats showing packet error rate:
> 
> 
> 
> for i in `seq 1 10`;do echo "$i) `date`" - $(ifconfig ens4f0| egrep
> "RX"| egrep overruns;sleep 5);done
> 
> 1) Thu Oct 12 19:50:40 SGT 2017 - RX packets:8364065830
> errors:2594507718 dropped:215 overruns:2594507718 frame:0
> 2) Thu Oct 12 19:50:45 SGT 2017 - RX packets:8365336060
> errors:2596662672 dropped:215 overruns:2596662672 frame:0
> 3) Thu Oct 12 19:50:50 SGT 2017 - RX packets:8366602087
> errors:2598840959 dropped:215 overruns:2598840959 frame:0
> 4) Thu Oct 12 19:50:55 SGT 2017 - RX packets:8367881271
> errors:2600989229 dropped:215 overruns:2600989229 frame:0
> 5) Thu Oct 12 19:51:01 SGT 2017 - RX packets:8369147536
> errors:2603157030 dropped:215 overruns:2603157030 frame:0
> 6) Thu Oct 12 19:51:06 SGT 2017 - RX packets:8370149567
> errors:2604904183 dropped:215 overruns:2604904183 frame:0
> 7) Thu Oct 12 19:51:11 SGT 2017 - RX packets:8371298018
> errors:2607183939 dropped:215 overruns:2607183939 frame:0
> 8) Thu Oct 12 19:51:16 SGT 2017 - RX packets:8372455587
> errors:2609411186 dropped:215 overruns:2609411186 frame:0
> 9) Thu Oct 12 19:51:21 SGT 2017 - RX packets:8373585102
> errors:2611680597 dropped:215 overruns:2611680597 frame:0
> 10) Thu Oct 12 19:51:26 SGT 2017 - RX packets:8374678508
> errors:2614053000 dropped:215 overruns:2614053000 frame:0
> 
> 
> 
> However, checking (with tc) shows no ring buffer overruns on NIC:
> 
> 
> 
> tc -s qdisc show dev ens4f0|egrep drop
> 
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> Sent 0 bytes 0 pkt (dropped 0,