Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Zaphod Beeblebrox
Don't forget that, as I understand it, the network stack generally suffers
from the same problem with 9k buffers.

On Sun, Jun 25, 2017 at 12:56 PM, Ben RUBSON  wrote:

> > On 25 Jun 2017, at 17:32, Ryan Stone  wrote:
> >
> > Having looked at the original email more closely, I see that you showed
> an mlxen interface with a 9020 MTU.  Seeing allocation failures of 9k mbuf
> clusters increase while you are far below the zone's limit means that
> you're definitely running into the bug I'm describing, and this bug could
> plausibly cause the iSCSI errors that you describe.
> >
> > The issue is that the newer version of the driver tries to allocate a
> single buffer to accommodate an MTU-sized packet.  Over time, however,
> memory will become fragmented and eventually it can become impossible to
> allocate a 9k physically contiguous buffer.  When this happens the driver
> is unable to allocate buffers to receive packets and is forced to drop
> them.  Presumably, if iSCSI suffers too many packet drops it will terminate
> the connection.  The older version of the driver limited itself to
> page-sized buffers, so it was immune to issues with memory fragmentation.
>
> Thank you for your explanation Ryan.
> You say "over time", and you're right, I have to wait several days (here
> 88) before the problem occurs.
> Strange, however, that with 2500MB of free memory the system is unable to
> find 9k of physically contiguous memory. But you never know :)
>
> Let's wait for your patch, then!
> (and reboot for now)
>
> Many thx !
>
> Ben
>
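To make the failure mode Ryan describes above concrete, here is a minimal
sketch (an illustration only, not the actual mlxen code) of an RX refill
path that insists on a single physically contiguous cluster per packet;
once physical memory becomes fragmented, the M_NOWAIT allocation of a 9k
cluster starts returning NULL and received packets have to be dropped:

/*
 * Illustration only -- not the mlxen driver.  Refill one RX ring slot with
 * a single cluster of size mbuf_sz (e.g. MJUM9BYTES for a 9000+ byte MTU).
 * When physical memory is fragmented, m_getjcl() with M_NOWAIT fails for
 * multi-page cluster sizes and the incoming packet must be dropped.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/errno.h>
#include <sys/mbuf.h>

static int
rx_refill_slot(struct mbuf **slot, int mbuf_sz, uint64_t *rx_alloc_fail)
{
	struct mbuf *m;

	m = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, mbuf_sz);
	if (m == NULL) {
		/* No physically contiguous cluster of mbuf_sz available. */
		(*rx_alloc_fail)++;
		return (ENOBUFS);
	}
	m->m_len = m->m_pkthdr.len = mbuf_sz;
	*slot = m;
	return (0);
}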


Re: Sporadic TCP/RST sent to client

2017-06-26 Thread Matt Joras
Out of curiosity, what sort of lagg setup are you using that's causing
the TCP packets to be split across the two lagg interfaces?

Matt

On Mon, Jun 26, 2017 at 1:35 PM, Navdeep Parhar  wrote:
> On Thu, Jun 22, 2017 at 3:57 PM, Youssef  GHORBAL
>  wrote:
>> Hello,
>>
>> I'm having an issue with a FreeBSD 11 based system sporadically
>> sending TCP/RST to clients after the initial TCP session has been correctly
>> initiated.
>> The sequence goes this way:
>>
>> 1 Client -> Server : SYN
>> 2 Server -> Client : SYN/ACK
>> 3 Client -> Server : ACK
>> 4 Client -> Server : PSH/ACK (upper protocol data sending starts 
>> here)
>> 5 Server -> Client : RST
>>
>> - The problem happens sporadically: the same client and the same server can
>> communicate smoothly on the same service port, but from time to time
>> (hours, sometimes days) the sequence above happens.
>> - The service running on the server is not responsible for the RST sent.
>> The service was deeply profiled and nothing happens that would justify the RST.
>> - tcpdump on the server side confirms that the packets arrive correctly
>> ordered.
>> - the traffic is very light, a few TCP sessions per day.
>> - the server is connected using a lagg enslaving two cxgb interfaces.
>>
>> In my effort to diagnose the problem (trying to get a reproducible
>> test case) I noticed that the issue is most likely triggered when these two
>> conditions are met:
>> - the ACK (in step 3) and the PSH/ACK (in step 4) arrive on
>> different lagg NICs.
>> - the time between those two packets is under 10 microseconds.
>>
>> When searching the interwebs I came across a strangely similar issue
>> reported here 7 years ago:
>> 
>> https://lists.freebsd.org/pipermail/freebsd-net/2010-August/026029.html
>>
>> (The OP seems to have resolved his issue by changing the netisr policy
>> from direct to hybrid, but there is no mention of lagg being used.)
>>
>> I'm pretty sure that I'm hitting some race condition, a scenario
>> where, due to multithreading, the PSH/ACK is somehow handled before the ACK,
>> making the kernel raise a TCP/RST since the initial TCP handshake hasn't
>> finished yet.
>>
>> I've read about how netisr works and I was under the impression that even
>> though it's SMP-enabled it was designed to preserve protocol ordering.
>>
>> What's the expected behaviour in this scenario on the netisr side?
>> How can I push the investigation further?
>
> I think you've already figured out the situation here -- the PSH/ACK is likely
> being handled before the ACK for the SYN because they arrived on different
> interfaces.  There is nothing in netisr dispatch that will maintain protocol
> ordering in this case.
>
> Regards,
> Navdeep


Re: Sporadic TCP/RST sent to client

2017-06-26 Thread Navdeep Parhar
On Thu, Jun 22, 2017 at 3:57 PM, Youssef  GHORBAL
 wrote:
> Hello,
>
> I'm having an issue with a FreeBSD 11 based system sporadically
> sending TCP/RST to clients after the initial TCP session has been correctly initiated.
> The sequence goes this way:
>
> 1 Client -> Server : SYN
> 2 Server -> Client : SYN/ACK
> 3 Client -> Server : ACK
> 4 Client -> Server : PSH/ACK (upper protocol data sending starts here)
> 5 Server -> Client : RST
>
> - The problem happens sporadically: the same client and the same server can
> communicate smoothly on the same service port, but from time to time (hours,
> sometimes days) the sequence above happens.
> - The service running on the server is not responsible for the RST sent.
> The service was deeply profiled and nothing happens that would justify the RST.
> - tcpdump on the server side confirms that the packets arrive correctly
> ordered.
> - the traffic is very light, a few TCP sessions per day.
> - the server is connected using a lagg enslaving two cxgb interfaces.
>
> In my effort to diagnose the problem (trying to get a reproducible
> test case) I noticed that the issue is most likely triggered when these two
> conditions are met:
> - the ACK (in step 3) and the PSH/ACK (in step 4) arrive on different
> lagg NICs.
> - the time between those two packets is under 10 microseconds.
>
> When searching the interwebs I came across a strangely similar issue
> reported here 7 years ago:
> 
> https://lists.freebsd.org/pipermail/freebsd-net/2010-August/026029.html
>
> (The OP seems to have resolved his issue by changing the netisr policy
> from direct to hybrid, but there is no mention of lagg being used.)
>
> I'm pretty sure that I'm hitting some race condition, a scenario
> where, due to multithreading, the PSH/ACK is somehow handled before the ACK,
> making the kernel raise a TCP/RST since the initial TCP handshake hasn't
> finished yet.
>
> I've read about how netisr works and I was under the impression that even
> though it's SMP-enabled it was designed to preserve protocol ordering.
>
> What's the expected behaviour in this scenario on the netisr side?
> How can I push the investigation further?

I think you've already figured out the situation here -- the PSH/ACK is likely
being handled before the ACK for the SYN because they arrived on different
interfaces.  There is nothing in netisr dispatch that will maintain protocol
ordering in this case.

Regards,
Navdeep
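For anyone who wants to check which dispatch policy a system is running
before experimenting along the lines of the 2010 thread, here is a minimal
userland sketch (an illustration only; it assumes the net.isr.dispatch
string sysctl provided by the newer netisr code) that reads the current
policy via sysctlbyname(3).  The policy can be changed with the same sysctl
or set as a loader tunable.

/*
 * Minimal sketch: print the current netisr dispatch policy ("direct",
 * "hybrid" or "deferred").
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	char policy[32];
	size_t len = sizeof(policy);

	if (sysctlbyname("net.isr.dispatch", policy, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("net.isr.dispatch: %s\n", policy);
	return (0);
}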


Request for reviewers for vlan(4) locking improvements

2017-06-26 Thread Matt Joras
Hello,

I am looking for people to give feedback on a review I've opened to
improve the locking in vlan(4). Anyone who's done a fair amount of
destroying vlan interfaces on live systems has probably run into
panics in if_vlan. This is because there is no real synchronization to
prevent a vlan interface from being destroyed while there are mbufs in
the network stack still going through its functions. Isilon's customers have hit
panics like this, so I've reworked the locking to make destroying
vlans safe on live systems, and fixed every instance of unsafe access
I could find.

If anyone has an interest in this work please review the revision:
https://reviews.freebsd.org/D11370

Thanks,
Matt Joras


Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Andrey V. Elsukov
On 26.06.2017 19:26, Matt Joras wrote:
> I didn't think that ixgbe(4) still suffered from this problem, and we
> use it in the same situations rstone mentioned above. Indeed, ixgbe(4)
> doesn't presently suffer from this problem (you can see that in your
> patch, as it is only effectively changing the other drivers), though
> it used to. It looks like this was first fixed in r280182.
> 

Yes, actually we have had this patch since 8.x. Recent drivers aren't
affected by this problem. iflib also has this code:

#ifndef CONTIGMALLOC_WORKS
	else
		fl->ifl_buf_size = MJUMPAGESIZE;
#else
	else if (sctx->isc_max_frame_size <= 4096)
		fl->ifl_buf_size = MJUMPAGESIZE;
	else if (sctx->isc_max_frame_size <= 9216)
		fl->ifl_buf_size = MJUM9BYTES;
	else
		fl->ifl_buf_size = MJUM16BYTES;
#endif

which by default (CONTIGMALLOC_WORKS not being defined) does not use the 9k/16k mbufs.

-- 
WBR, Andrey V. Elsukov
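To make the pattern above concrete, here is a minimal sketch (an
illustration of the approach only, not the real m_preferredsize() from the
patch) of a helper that picks an RX cluster size from the maximum frame
size; with the larger sizes disabled, the choice is capped at MJUMPAGESIZE,
which never requires a multi-page physically contiguous allocation:

/*
 * Sketch only -- not the actual m_preferredsize().  Pick an RX cluster size
 * for a given maximum frame size.  With allow_big_clusters set to 0 (an
 * imagined knob; the real patch adds sysctls for this), the result is
 * capped at MJUMPAGESIZE so the allocation never spans more than one page.
 */
#include <sys/param.h>

static int allow_big_clusters = 0;

static int
rx_cluster_size(int max_frame_size)
{
	if (max_frame_size <= MCLBYTES)
		return (MCLBYTES);
	if (max_frame_size <= MJUMPAGESIZE || !allow_big_clusters)
		return (MJUMPAGESIZE);
	if (max_frame_size <= MJUM9BYTES)
		return (MJUM9BYTES);
	return (MJUM16BYTES);
}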





Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Matt Joras
On Mon, Jun 26, 2017 at 6:36 AM, Andrey V. Elsukov  wrote:
> On 26.06.2017 16:29, Ben RUBSON wrote:
>>
>>> On 26 Jun 2017, at 15:25, Andrey V. Elsukov  wrote:
>>>
>>> On 26.06.2017 16:27, Ben RUBSON wrote:

> On 26 Jun 2017, at 15:13, Andrey V. Elsukov  wrote:
>
> I think it is not an mlxen-specific problem; we have the same symptoms with
> the ixgbe(4) driver too. To avoid the problem we have patches that
> disable the use of 9k mbufs and only use 4k mbufs instead.

 Interesting feedback Andrey, thank you!
 The problem may then be "general".
 So you still use a large MTU (>=9000) but only allocate 4k mbufs, as a
 workaround?
>>>
>>> Yes.
>>
>> Is it a kernel patch or a driver/ixgbe patch ?
>
> I attached it.
>
> --
> WBR, Andrey V. Elsukov

I didn't think that ixgbe(4) still suffered from this problem, and we
use it in the same situations rstone mentioned above. Indeed, ixgbe(4)
doesn't presently suffer from this problem (you can see that in your
patch, as it is only effectively changing the other drivers), though
it used to. It looks like this was first fixed in r280182.


Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread YongHyeon PYUN
On Mon, Jun 26, 2017 at 03:44:58PM +0200, Julien Cigar wrote:
> On Mon, Jun 26, 2017 at 04:13:33PM +0300, Andrey V. Elsukov wrote:
> > On 25.06.2017 18:32, Ryan Stone wrote:
> > > Having looked at the original email more closely, I see that you showed
> > > an
> > > mlxen interface with a 9020 MTU.  Seeing allocation failures of 9k mbuf
> > > clusters increase while you are far below the zone's limit means that
> > > you're definitely running into the bug I'm describing, and this bug could
> > > plausibly cause the iSCSI errors that you describe.
> > > 
> > > The issue is that the newer version of the driver tries to allocate a
> > > single buffer to accommodate an MTU-sized packet.  Over time, however,
> > > memory will become fragmented and eventually it can become impossible to
> > > allocate a 9k physically contiguous buffer.  When this happens the driver
> > > is unable to allocate buffers to receive packets and is forced to drop
> > > them.  Presumably, if iSCSI suffers too many packet drops it will 
> > > terminate
> > > the connection.  The older version of the driver limited itself to
> > > page-sized buffers, so it was immune to issues with memory fragmentation.
> > 
> > I think it is not an mlxen-specific problem; we have the same symptoms with
> > the ixgbe(4) driver too. To avoid the problem we have patches that
> > disable the use of 9k mbufs and only use 4k mbufs instead.
> 
> I had the same issue on a lightly loaded HP DL20 machine (BCM5720
> chipset), 8GB of RAM, running 10.3. The problem usually happens
> within 30 days, with 9k jumbo cluster allocation failures.
> 

This looks strange to me.  If I recall correctly bge(4) does not
request physically contiguous 9k jumbo buffers for BCM5720 so it
wouldn't suffer from memory fragmentation. (It uses m_cljget() and
takes advantage of extended RX BDs to handle up to 4 DMA segments).
If your controller is either BCM5714/BCM5715 or BCM5780, it
requires physically contiguous 9k jumbo buffers to handle jumbo
frames though.


Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Ben RUBSON

> On 26 Jun 2017, at 15:13, Andrey V. Elsukov  wrote:
> 
> I think it is not an mlxen-specific problem; we have the same symptoms with
> the ixgbe(4) driver too. To avoid the problem we have patches that
> disable the use of 9k mbufs and only use 4k mbufs instead.

Another workaround is to decrease the MTU until 9k mbufs are no longer used.
On my systems this gives a 4072-byte MTU.
It solved the issue without having to reboot.
Of course it's just a workaround, as decreasing the MTU increases overhead...


Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Julien Cigar
On Mon, Jun 26, 2017 at 04:13:33PM +0300, Andrey V. Elsukov wrote:
> On 25.06.2017 18:32, Ryan Stone wrote:
> > Having looked at the original email more closely, I see that you showed an
> > mlxen interface with a 9020 MTU.  Seeing allocation failures of 9k mbuf
> > clusters increase while you are far below the zone's limit means that
> > you're definitely running into the bug I'm describing, and this bug could
> > plausibly cause the iSCSI errors that you describe.
> > 
> > The issue is that the newer version of the driver tries to allocate a
> > single buffer to accommodate an MTU-sized packet.  Over time, however,
> > memory will become fragmented and eventually it can become impossible to
> > allocate a 9k physically contiguous buffer.  When this happens the driver
> > is unable to allocate buffers to receive packets and is forced to drop
> > them.  Presumably, if iSCSI suffers too many packet drops it will terminate
> > the connection.  The older version of the driver limited itself to
> > page-sized buffers, so it was immune to issues with memory fragmentation.
> 
> I think it is not an mlxen-specific problem; we have the same symptoms with
> the ixgbe(4) driver too. To avoid the problem we have patches that
> disable the use of 9k mbufs and only use 4k mbufs instead.

I had the same issue on a lightly loaded HP DL20 machine (BCM5720
chipset), 8GB of RAM, running 10.3. The problem usually happens
within 30 days, with 9k jumbo cluster allocation failures.

> 
> -- 
> WBR, Andrey V. Elsukov
> 




-- 
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11  6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.




Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Ben RUBSON
> On 26 Jun 2017, at 15:36, Andrey V. Elsukov  wrote:
> 
> On 26.06.2017 16:29, Ben RUBSON wrote:
>> 
>>> On 26 Jun 2017, at 15:25, Andrey V. Elsukov  wrote:
>>> 
>>> On 26.06.2017 16:27, Ben RUBSON wrote:
 
> On 26 Jun 2017, at 15:13, Andrey V. Elsukov  wrote:
> 
> I think it is not an mlxen-specific problem; we have the same symptoms with
> the ixgbe(4) driver too. To avoid the problem we have patches that
> disable the use of 9k mbufs and only use 4k mbufs instead.
 
 Interesting feedback Andrey, thank you!
 The problem may then be "general".
 So you still use a large MTU (>=9000) but only allocate 4k mbufs, as a
 workaround?
>>> 
>>> Yes.
>> 
>> Is it a kernel patch or a driver/ixgbe patch ?
> 
> I attached it.

Thank you!
The idea of new sysctls to enable/disable the workaround is nice.
It should be easy to modify for use with mlx4_en, while waiting for Ryan's
specific work on this driver.

I found a similar issue, reported on 2013-10-28:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=183381

FreeBSD certainly needs a solid general patch!


Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Andrey V. Elsukov
On 26.06.2017 16:29, Ben RUBSON wrote:
> 
>> On 26 Jun 2017, at 15:25, Andrey V. Elsukov  wrote:
>>
>> On 26.06.2017 16:27, Ben RUBSON wrote:
>>>
 On 26 Jun 2017, at 15:13, Andrey V. Elsukov  wrote:

 I think it is not an mlxen-specific problem; we have the same symptoms with
 the ixgbe(4) driver too. To avoid the problem we have patches that
 disable the use of 9k mbufs and only use 4k mbufs instead.
>>>
>>> Interesting feedback Andrey, thank you!
>>> The problem may then be "general".
>>> So you still use a large MTU (>=9000) but only allocate 4k mbufs, as a
>>> workaround?
>>
>> Yes.
> 
> Is it a kernel patch or a driver/ixgbe patch ?

I attached it.

-- 
WBR, Andrey V. Elsukov
From 57b4789b7f6699a581ded2f4e07c7b12737af1e7 Mon Sep 17 00:00:00 2001
From: "Andrey V. Elsukov" 
Date: Thu, 6 Oct 2016 14:56:37 +0300
Subject: [PATCH 04/65] Add m_preferredsize() and use it in all intel's
 drivers.

---
 sys/dev/e1000/if_em.c |  7 +--
 sys/dev/e1000/if_igb.c|  7 +--
 sys/dev/ixgbe/if_ix.c |  5 +
 sys/dev/ixgbe/if_ixv.c|  5 +
 sys/dev/ixl/if_ixlv.c |  5 +
 sys/dev/ixl/ixl_pf_main.c |  5 +
 sys/kern/kern_mbuf.c  | 35 +++
 sys/sys/mbuf.h|  1 +
 8 files changed, 42 insertions(+), 28 deletions(-)

diff --git a/sys/dev/e1000/if_em.c b/sys/dev/e1000/if_em.c
index 7e2690eae08..1af66b7c519 100644
--- a/sys/dev/e1000/if_em.c
+++ b/sys/dev/e1000/if_em.c
@@ -1421,12 +1421,7 @@ em_init_locked(struct adapter *adapter)
 	** Figure out the desired mbuf
 	** pool for doing jumbos
 	*/
-	if (adapter->hw.mac.max_frame_size <= 2048)
-		adapter->rx_mbuf_sz = MCLBYTES;
-	else if (adapter->hw.mac.max_frame_size <= 4096)
-		adapter->rx_mbuf_sz = MJUMPAGESIZE;
-	else
-		adapter->rx_mbuf_sz = MJUM9BYTES;
+	adapter->rx_mbuf_sz = m_preferredsize(adapter->hw.mac.max_frame_size);
 
 	/* Prepare receive descriptors and buffers */
 	if (em_setup_receive_structures(adapter)) {
diff --git a/sys/dev/e1000/if_igb.c b/sys/dev/e1000/if_igb.c
index 8e018995029..bfaecae1f71 100644
--- a/sys/dev/e1000/if_igb.c
+++ b/sys/dev/e1000/if_igb.c
@@ -1325,12 +1325,7 @@ igb_init_locked(struct adapter *adapter)
 	** Figure out the desired mbuf pool
 	** for doing jumbo/packetsplit
 	*/
-	if (adapter->max_frame_size <= 2048)
-		adapter->rx_mbuf_sz = MCLBYTES;
-	else if (adapter->max_frame_size <= 4096)
-		adapter->rx_mbuf_sz = MJUMPAGESIZE;
-	else
-		adapter->rx_mbuf_sz = MJUM9BYTES;
+	adapter->rx_mbuf_sz = m_preferredsize(adapter->max_frame_size);
 
 	/* Prepare receive descriptors and buffers */
 	if (igb_setup_receive_structures(adapter)) {
diff --git a/sys/dev/ixgbe/if_ix.c b/sys/dev/ixgbe/if_ix.c
index cf2231dc8fc..26fce2704ba 100644
--- a/sys/dev/ixgbe/if_ix.c
+++ b/sys/dev/ixgbe/if_ix.c
@@ -1118,10 +1118,7 @@ ixgbe_init_locked(struct adapter *adapter)
 	ixgbe_set_multi(adapter);
 
 	/* Determine the correct mbuf pool, based on frame size */
-	if (adapter->max_frame_size <= MCLBYTES)
-		adapter->rx_mbuf_sz = MCLBYTES;
-	else
-		adapter->rx_mbuf_sz = MJUMPAGESIZE;
+	adapter->rx_mbuf_sz = m_preferredsize(adapter->max_frame_size);
 
 	/* Prepare receive descriptors and buffers */
 	if (ixgbe_setup_receive_structures(adapter)) {
diff --git a/sys/dev/ixgbe/if_ixv.c b/sys/dev/ixgbe/if_ixv.c
index 80fb1b34be3..5062affb779 100644
--- a/sys/dev/ixgbe/if_ixv.c
+++ b/sys/dev/ixgbe/if_ixv.c
@@ -698,10 +698,7 @@ ixv_init_locked(struct adapter *adapter)
 	** Determine the correct mbuf pool
 	** for doing jumbo/headersplit
 	*/
-	if (ifp->if_mtu > ETHERMTU)
-		adapter->rx_mbuf_sz = MJUMPAGESIZE;
-	else
-		adapter->rx_mbuf_sz = MCLBYTES;
+	adapter->rx_mbuf_sz = m_preferredsize(ifp->if_mtu);
 
 	/* Prepare receive descriptors and buffers */
 	if (ixgbe_setup_receive_structures(adapter)) {
diff --git a/sys/dev/ixl/if_ixlv.c b/sys/dev/ixl/if_ixlv.c
index c447c34689e..608d784bfee 100644
--- a/sys/dev/ixl/if_ixlv.c
+++ b/sys/dev/ixl/if_ixlv.c
@@ -904,10 +904,7 @@ ixlv_init_locked(struct ixlv_sc *sc)
 
 		ixl_init_tx_ring(que);
 
-		if (vsi->max_frame_size <= MCLBYTES)
-			rxr->mbuf_sz = MCLBYTES;
-		else
-			rxr->mbuf_sz = MJUMPAGESIZE;
+		rxr->mbuf_sz = m_preferredsize(vsi->max_frame_size);
 		ixl_init_rx_ring(que);
 	}
 
diff --git a/sys/dev/ixl/ixl_pf_main.c b/sys/dev/ixl/ixl_pf_main.c
index d8da4cfee10..8600b0f931e 100644
--- a/sys/dev/ixl/ixl_pf_main.c
+++ b/sys/dev/ixl/ixl_pf_main.c
@@ -2067,10 +2067,7 @@ ixl_initialize_vsi(struct ixl_vsi *vsi)
 		ixl_init_tx_ring(que);
 
 		/* Next setup the HMC RX Context  */
-		if (vsi->max_frame_size <= MCLBYTES)
-			rxr->mbuf_sz = MCLBYTES;
-		else
-			rxr->mbuf_sz = MJUMPAGESIZE;
+		rxr->mbuf_sz = m_preferredsize(vsi->max_frame_size);
 
 		u16 max_rxmax = rxr->mbuf_sz * hw->func_caps.rx_buf_chain_len;
 
diff --git a/sys/kern/kern_mbuf.c b/sys/kern/kern_mbuf.c
index 0d0c1c86b16..7c10cedb075 100644
--- a/sys/kern/kern_mbuf.c
+++ b/sys/kern/kern_mbuf.c
@@ -103,6 +103,10 

Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Ben RUBSON

> On 26 Jun 2017, at 15:25, Andrey V. Elsukov  wrote:
> 
> On 26.06.2017 16:27, Ben RUBSON wrote:
>> 
>>> On 26 Jun 2017, at 15:13, Andrey V. Elsukov  wrote:
>>> 
>>> I think it is not an mlxen-specific problem; we have the same symptoms with
>>> the ixgbe(4) driver too. To avoid the problem we have patches that
>>> disable the use of 9k mbufs and only use 4k mbufs instead.
>> 
>> Interesting feedback Andrey, thank you!
>> The problem may then be "general".
>> So you still use a large MTU (>=9000) but only allocate 4k mbufs, as a
>> workaround?
> 
> Yes.

Is it a kernel patch or a driver/ixgbe patch ?


Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Andrey V. Elsukov
On 26.06.2017 16:27, Ben RUBSON wrote:
> 
>> On 26 Jun 2017, at 15:13, Andrey V. Elsukov  wrote:
>>
>> I think it is not an mlxen-specific problem; we have the same symptoms with
>> the ixgbe(4) driver too. To avoid the problem we have patches that
>> disable the use of 9k mbufs and only use 4k mbufs instead.
> 
> Interesting feedback Andrey, thank you!
> The problem may then be "general".
> So you still use a large MTU (>=9000) but only allocate 4k mbufs, as a
> workaround?

Yes.

-- 
WBR, Andrey V. Elsukov





Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Ben RUBSON

> On 26 Jun 2017, at 15:13, Andrey V. Elsukov  wrote:
> 
> I think it is not an mlxen-specific problem; we have the same symptoms with
> the ixgbe(4) driver too. To avoid the problem we have patches that
> disable the use of 9k mbufs and only use 4k mbufs instead.

Interesting feedback Andrey, thank you!
The problem may then be "general".
So you still use a large MTU (>=9000) but only allocate 4k mbufs, as a
workaround?


Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Andrey V. Elsukov
On 25.06.2017 18:32, Ryan Stone wrote:
> Having looked at the original email more closely, I see that you showed an
> mlxen interface with a 9020 MTU.  Seeing allocation failures of 9k mbuf
> clusters increase while you are far below the zone's limit means that
> you're definitely running into the bug I'm describing, and this bug could
> plausibly cause the iSCSI errors that you describe.
> 
> The issue is that the newer version of the driver tries to allocate a
> single buffer to accommodate an MTU-sized packet.  Over time, however,
> memory will become fragmented and eventually it can become impossible to
> allocate a 9k physically contiguous buffer.  When this happens the driver
> is unable to allocate buffers to receive packets and is forced to drop
> them.  Presumably, if iSCSI suffers too many packet drops it will terminate
> the connection.  The older version of the driver limited itself to
> page-sized buffers, so it was immune to issues with memory fragmentation.

I think it is not an mlxen-specific problem; we have the same symptoms with
the ixgbe(4) driver too. To avoid the problem we have patches that
disable the use of 9k mbufs and only use 4k mbufs instead.

-- 
WBR, Andrey V. Elsukov





Re: mbuf_jumbo_9k & iSCSI failing

2017-06-26 Thread Edward Napierala
2017-06-25 16:32 GMT+01:00 Ryan Stone :

> Having looked at the original email more closely, I see that you showed an
> mlxen interface with a 9020 MTU.  Seeing allocation failures of 9k mbuf
> clusters increase while you are far below the zone's limit means that
> you're definitely running into the bug I'm describing, and this bug could
> plausibly cause the iSCSI errors that you describe.
>
> The issue is that the newer version of the driver tries to allocate a
> single buffer to accommodate an MTU-sized packet.  Over time, however,
> memory will become fragmented and eventually it can become impossible to
> allocate a 9k physically contiguous buffer.  When this happens the driver
> is unable to allocate buffers to receive packets and is forced to drop
> them.  Presumably, if iSCSI suffers too many packet drops it will terminate
> the connection.  [..]


More specifically, it will terminate the connection when there's no "ping
reply" from the other side for the configured amount of time, which defaults
to five seconds.  It can be changed using the kern.iscsi.ping_timeout sysctl,
as described in iscsi(4).
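For reference, a minimal userland sketch (an illustration only) that reads
that sysctl via sysctlbyname(3) and optionally sets a new value:

/*
 * Minimal sketch: read kern.iscsi.ping_timeout (seconds) and, if a value
 * is given on the command line, set it (requires root).
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
	int cur, val;
	size_t len = sizeof(cur);

	if (sysctlbyname("kern.iscsi.ping_timeout", &cur, &len, NULL, 0) == -1) {
		perror("read kern.iscsi.ping_timeout");
		return (1);
	}
	printf("kern.iscsi.ping_timeout: %d seconds\n", cur);

	if (argc > 1) {
		val = atoi(argv[1]);
		if (sysctlbyname("kern.iscsi.ping_timeout", NULL, NULL,
		    &val, sizeof(val)) == -1) {
			perror("set kern.iscsi.ping_timeout");
			return (1);
		}
		printf("set to %d seconds\n", val);
	}
	return (0);
}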