On 09/22/17 22:33, Ben RUBSON wrote:
> On 22 Sep 2017, at 20:48, Ryan Stone wrote:
>
> Hans and I have proposed different approaches to the problem. I was
> taken off this issue at $WORK for a while, but coincidentally I just
> picked it up again in the last week or so. I'm working on evaluating
> the performance characteristics of the two approaches, and once I'm
> satisfied with that [...]

On 12 Jul 2017, at 01:02, Ryan Stone wrote:
>
> I've just put up a review that fixes mlx4_en to no longer use clusters
> larger than PAGE_SIZE in its receive path. The patch is based off of the
> older version of the driver which did the same, but keeps all of the
> changes to the driver since then (including support for bus_dma). The
> review can be [...]
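
As a rough sketch of the general idea (not the actual review, and the
function name is invented for the example): an RX refill path that only ever
asks for page-sized clusters never needs physically contiguous multi-page
memory, which is what makes 9k clusters fail under fragmentation.

/*
 * Rough sketch only, not the actual mlx4_en change: refill an RX slot
 * with a page-sized cluster.  MJUMPAGESIZE clusters are a single page,
 * so they can be allocated even when physical memory is too fragmented
 * to provide the contiguous pages a MJUM9BYTES (9k) cluster requires;
 * a 9000-byte frame is then scattered across several such buffers.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

static struct mbuf *
rx_alloc_page_cluster(void)
{
        struct mbuf *m;

        m = m_getjcl(M_NOWAIT, MT_DATA, M_PKTHDR, MJUMPAGESIZE);
        if (m == NULL)
                return (NULL);  /* caller keeps/recycles the old buffer */
        m->m_len = m->m_pkthdr.len = MJUMPAGESIZE;
        return (m);
}
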
Don't forget that, generally, as I understand it, the network stack suffers
from the same problem for 9k buffers.

On 26.06.2017 19:26, Matt Joras wrote:
> I didn't think that ixgbe(4) still suffered from this problem, and we
> use it in the same situations rstone mentioned above. Indeed, ixgbe(4)
> doesn't presently suffer from this problem (you can see that in your
> patch, as it is only effectively changing [...]

On 26 Jun 2017, at 15:13, Andrey V. Elsukov wrote:
>
> I think it is not an mlxen-specific problem; we have the same symptoms with
> the ixgbe(4) driver too. To avoid the problem we have patches that disable
> the use of 9k mbufs and instead use only 4k mbufs.
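
A rough sketch of what such a patch amounts to (not the actual ixgbe change;
the function name and exact header arithmetic are invented for illustration):
the driver's RX buffer-size selection is simply capped at one page, so a
jumbo MTU never pulls from the 9k/16k jumbo zones.

/*
 * Rough sketch: choose the RX cluster size from the MTU, but never go
 * above a single page.  Jumbo MTUs are then handled by scattering each
 * frame across 4k buffers instead of relying on 9k/16k clusters.
 */
#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/ethernet.h>

static int
rx_cluster_size(int mtu)
{
        int framelen = mtu + ETHER_HDR_LEN + ETHER_VLAN_ENCAP_LEN +
            ETHER_CRC_LEN;

        if (framelen <= MCLBYTES)
                return (MCLBYTES);      /* standard 2k cluster */
        return (MJUMPAGESIZE);          /* never MJUM9BYTES/MJUM16BYTES */
}
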

On 25 Jun 2017, at 17:32, Ryan Stone wrote:
>
> Having looked at the original email more closely, I see that you showed an
> mlxen interface with a 9020 MTU. Seeing allocation failures of 9k mbuf
> clusters increase while you are far below the zone's limit means that
> you're definitely running into the bug I'm describing, and this bug could [...]
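
Those counters are the ones vmstat -z and netstat -m report for the 9k zone.
As a small illustrative userland sketch (assuming libmemstat(3) and the
mbuf_jumbo_9k zone name; compile with -lmemstat), they can also be read
programmatically:

/*
 * Illustrative sketch: print usage, limit and allocation-failure counts
 * for the 9k jumbo cluster zone.  Failures rising while the count stays
 * well below the limit points at fragmentation, not zone exhaustion.
 */
#include <sys/types.h>
#include <memstat.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
        struct memory_type_list *mtlp;
        struct memory_type *mtp;

        mtlp = memstat_mtl_alloc();
        if (mtlp == NULL || memstat_sysctl_uma(mtlp, 0) < 0)
                exit(1);
        mtp = memstat_mtl_find(mtlp, ALLOCATOR_UMA, "mbuf_jumbo_9k");
        if (mtp == NULL)
                exit(1);
        printf("9k clusters: %ju in use, limit %ju, %ju failures\n",
            (uintmax_t)memstat_get_count(mtp),
            (uintmax_t)memstat_get_countlimit(mtp),
            (uintmax_t)memstat_get_failures(mtp));
        memstat_mtl_free(mtlp);
        return (0);
}
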
On 25 Jun 2017, at 17:14, Ryan Stone wrote:
>
> Is this setup using the mlx4_en driver? If so, recent versions of that
> driver have a regression when using MTUs greater than the page size (4096 on
> i386/amd64). The bug will cause the card to drop packets when the system
> is under memory pressure, and in certain cases the card can get into a [...]

> On 30 Dec 2016, at 22:55, Ben RUBSON wrote:
>
> Hello,
>
> 2 FreeBSD 11.0-p3 servers, one iSCSI initiator, one target.
> Both with Mellanox ConnectX-3 40G.
>
> For the past few days, sometimes, under undetermined circumstances, as soon
> as there is some (very low) iSCSI [...]