On Thu, Dec 10, 2015 at 10:40 AM, Adrian Chadd wrote:
> On 10 December 2015 at 10:29, Denis Pearson wrote:
>> On Thu, Dec 10, 2015 at 2:18 PM, Eggert, Lars wrote:
>>
>>> On 2015-10-26, at 18:40, Eggert, Lars
[snip]
If RSS works fine on the latest driver then great.
This was with single queue netperf, right?
-a
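(For reference, a single-stream netperf run of the kind being asked about might look like the following; the server address is a placeholder.)

```shell
# On the receiver: start the netperf daemon (control port 12865).
netserver
# On the sender: one TCP stream for 30 seconds, default message sizes.
netperf -H 10.0.0.2 -t TCP_STREAM -l 30
```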
___
freebsd-net@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to
Hi,
On 2015-12-10, at 20:42, Denis Pearson wrote:
> I can probably find a snapshot with the code from that time and extract a
> diff, yes. I just don't know whether it's worth the time when the problem
> is not reproducible on the current 1.4.8 driver, which will
On 2015-10-26, at 15:38, Pieper, Jeffrey E wrote:
> With the latest ixl component from:
> https://downloadcenter.intel.com/download/25160/Network-Adapter-Driver-for-PCI-E-40-Gigabit-Network-Connections-under-FreeBSD-
>
> running on 10.2 amd64, I easily get 9.6 Gb/s
Subject: Re: ixl 40G bad performance?
On 2015-10-26, at 4:38, Kevin Oberman <rkober...@gmail.com> wrote:
> On Sun, Oct 25, 2015 at 12:10 AM, Daniel Engberg <
> daniel.engberg.li...@pyret.net> wrote:
>
>> One thing I've noticed that probably affects your performance benc
On 2015-10-26, at 17:08, Pieper, Jeffrey E wrote:
> As a caveat, this was using default netperf message sizes.
I get the same ~3 Gb/s with the default netperf sizes and driver 1.4.5.
When you tcpdump during the run, do you see TSO/LRO in effect, i.e., do you see segments larger than the MTU?
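(One way to check, as a sketch; the interface name is an assumption:)

```shell
# During a netperf run, look at the frame lengths tcpdump reports.
# Lengths well above the 1500-byte MTU on the sender mean TSO is active;
# on the receiver they mean LRO is coalescing.
tcpdump -ni ixl0 -c 20 tcp
# Also confirm the offload flags are set at all:
ifconfig ixl0 | grep -Eo 'TSO4|TSO6|LRO'
```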
On Sun, Oct 25, 2015 at 12:10 AM, Daniel Engberg <
daniel.engberg.li...@pyret.net> wrote:
> One thing I've noticed that probably affects your performance benchmarks
> somewhat is that you're using iperf(2) instead of the newer iperf3 but I
> could be wrong...
>
> Best regards,
> Daniel
>
iperf3
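(For completeness, a typical iperf3 run; the address is a placeholder:)

```shell
# server side
iperf3 -s
# client side: single TCP stream, 30 s, report in Gbit/s
iperf3 -c 10.0.0.2 -t 30 -f g
```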
On 2015-10-23, at 23:36, Eric Joyner wrote:
> I see that the sysctl does clobber the global value, but have you tried
> lowering the interval / raising the rate? You could try something like
> 10usecs, and see if that helps. We'll do some more investigation here --
> 3Gb/s on
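(A sketch of what lowering the interval might look like; the exact sysctl names are assumptions and differ across ixl driver versions, so list them first:)

```shell
# Find the interrupt-throttling knobs this driver exposes ...
sysctl -a | grep -i itr
# ... then lower the rx interval (hypothetical knob name, value in usecs).
sysctl dev.ixl.0.rx_itr=10
```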
13 on a 40G interface?? I don't think that's very good for Linux either, is
this a 4x10 adapter?
Maybe elaborating on the details of the hardware, you sure you don't have a
bad PCI slot
somewhere that might be throttling everything?
Cheers,
Jack
On Sat, Oct 24, 2015 at 12:43 AM, Eggert, Lars
On 2015-10-24, at 10:32, Jack Vogel wrote:
> 13 on a 40G interface?? I don't think that's very good for Linux either, is
> this a 4x10 adapter?
No, it's a 2x40. And I can get it into the high 30s with tuning. I just
mentioned the value to illustrate that something seems to
Bruce mostly has it right -- ITR is the minimum latency between interrupts.
But it does actually guarantee a minimum period between interrupts.
Fortville is a little unique, though, in that there is another ITR
setting that can ensure a certain average number of interrupts per
second
Hi,
for those of you following along, I did try jumbograms and throughput increases
roughly 5x. So it looks like I'm hitting a packet-rate limit somewhere.
Lars
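(Back-of-the-envelope support for the packet-rate theory: at ~3 Gb/s with a standard 1500-byte MTU and at roughly 5x that with jumbograms, the implied packet rates come out nearly the same, which is what a per-packet bottleneck would predict. The MSS values below are illustrative.)

```shell
# packets per second = goodput (bit/s) / (MSS * 8)
pps() { echo $(( $1 / ($2 * 8) )); }
pps 3000000000 1448    # ~3 Gb/s at MSS 1448 (1500-byte MTU)
pps 15000000000 8948   # ~15 Gb/s at MSS 8948 (9000-byte MTU)
```

Both work out to the low-to-mid 200k packets/sec range.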
On Wed, 21 Oct 2015, Bruce Evans wrote:
Fix for em:
X diff -u2 if_em.c~ if_em.c
X --- if_em.c~ 2015-09-28 06:29:35.0 +
X +++ if_em.c 2015-10-18 18:49:36.876699000 +
X @@ -609,8 +609,8 @@
X em_tx_abs_int_delay_dflt);
X em_add_int_delay_sysctl(adapter, "itr",
X
On 2015-10-22, at 9:38, Eggert, Lars wrote:
> for those of you following along, I did try jumbograms and throughput
> increases roughly 5x. So it looks like I'm hitting a packet-rate limit
> somewhere.
Does the ixl driver have an issue with TSO/LRO?
If I tcpdump on the
The 40G hardware is absolutely dependent on firmware; if you have a mismatch,
for instance, it can totally bork things. So, I would work with your Intel
rep and be sure you have the correct version for your specific hardware.
Good luck,
Jack
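(To check which firmware/driver pairing is actually running, something like the following; the sysctl name is an assumption and varies by driver version, while the attach banner in dmesg is the more portable source:)

```shell
# Firmware/NVM version as the driver reports it (knob name may differ),
# falling back to the attach-time banner in the kernel message buffer.
sysctl dev.ixl.0.fw_version 2>/dev/null || dmesg | grep '^ixl0:'
```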
On Wed, Oct 21, 2015 at 5:25 AM, Eggert, Lars
Hi Jack,
On 2015-10-21, at 16:14, Jack Vogel wrote:
> The 40G hardware is absolutely dependent on firmware, if you have a mismatch
> for instance, it can totally bork things. So, I would work with your Intel
> rep and be sure you have the correct version for your specific
+ Eric from Intel
(Also trimming the CC list as it wouldn't let me send the message
otherwise.)
On 10/21/15 at 02:59P, Eggert, Lars wrote:
> Hi Jack,
>
> On 2015-10-21, at 16:14, Jack Vogel wrote:
> > The 40G hardware is absolutely dependent on firmware, if you have a
Hi Bruce,
thanks for the very detailed analysis of the ixl sysctls!
On 2015-10-20, at 16:51, Bruce Evans wrote:
>
> Lowering (improving) latency always lowers (unimproves) throughput by
> increasing load.
That, I also understand. But even when I back off the itr values
Hi,
On 2015-10-20, at 10:24, Ian Smith wrote:
> Actually, you want to set hw.acpi.cpu.cx_lowest=C1 instead.
Done.
On 2015-10-19, at 17:55, Luigi Rizzo wrote:
> On Mon, Oct 19, 2015 at 8:34 AM, Eggert, Lars wrote:
>> The only other
On Mon, 19 Oct 2015 21:47:36 -0700, Kevin Oberman wrote:
> > I suspect it might not touch the c states, but better check. The safest is
> > disable them in the bios.
> >
>
> To disable C-States:
> sysctl dev.cpu.0.cx_lowest=C1
Actually, you want to set hw.acpi.cpu.cx_lowest=C1 instead.
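(To make that stick across reboots, the stock rc.conf power-profile knobs can be set as well; a sketch for a 10.x install:)

```shell
# /etc/rc.conf: keep all CPUs at C1 or shallower on AC and on battery.
performance_cx_lowest="C1"
economy_cx_lowest="C1"
# Runtime equivalent of the advice above:
#   sysctl hw.acpi.cpu.cx_lowest=C1
```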
On 2015-10-19, at 17:55, Luigi Rizzo wrote:
i would look at the following:
- c states and clock speed - make sure you never go below C1,
and fix the clock speed to max.
Sure, these parameters also affect the 10G card, but there
may be strange interactions that trigger the power-saving
modes in different ways
- interrupt moderation
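(Pinning the clock as suggested might look like this; the frequency value is machine-specific and has to be picked from what freq_levels reports:)

```shell
# List the available P-states, then pin to the highest one.
sysctl dev.cpu.0.freq_levels
sysctl dev.cpu.0.freq=2600      # example value; use the top level reported
service powerd onestop          # keep powerd from scaling back down
```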
Hi,
On 2015-10-19, at 16:20, Luigi Rizzo wrote:
>
> i would look at the following:
> - c states and clock speed - make sure you never go below C1,
> and fix the clock speed to max.
> Sure these parameters also affect the 10G card, but there
> may be strange interaction
Hi,
in order to eliminate network or hardware weirdness, I've rerun the test with
Linux 4.3rc6, where I get 13.1 Gbits/sec throughput and 52 usec flood ping
latency. Not great either, but in line with earlier experiments with Mellanox
NICs and an untuned Linux system.
On 10/19/15 at 08:11P, Luigi Rizzo wrote:
> On Monday, October 19, 2015, Eggert, Lars wrote:
>
> >
> > How do I turn off flow director?
>
>
> I am not sure if it is enabled in FreeBSD. It is in Linux and almost
> halves the pkt rate with netmap (from 35 down to 19 mpps).
>
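(On the Linux side, flow director corresponds to the ntuple feature and can be toggled with ethtool; the interface name is a placeholder:)

```shell
# Disable flow director (ntuple/ATR filters) on an i40e interface.
ethtool -K eth0 ntuple off
# Verify:
ethtool -k eth0 | grep ntuple
```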