Re: ixl 40G bad performance?

2015-10-21 Thread Jack Vogel
The 40G hardware is absolutely dependent on firmware; if you have a mismatch,
for instance, it can totally bork things. So I would work with your Intel
rep and be sure you have the correct version for your specific hardware.

Good luck,

Jack


On Wed, Oct 21, 2015 at 5:25 AM, Eggert, Lars  wrote:

> Hi Bruce,
>
> thanks for the very detailed analysis of the ixl sysctls!
>
> On 2015-10-20, at 16:51, Bruce Evans  wrote:
> >
> > Lowering (improving) latency always lowers (unimproves) throughput by
> > increasing load.
>
> That, I also understand. But even when I back off the itr values to
> something more reasonable, throughput still remains low.
>
> With all the tweaking I have tried, I have yet to top 3 Gb/s with ixl
> cards, whereas they do ~13 Gb/s on Linux straight out of the box.
>
> Lars
>


Re: ixl 40G bad performance?

2015-10-21 Thread Eggert, Lars
Hi Jack,

On 2015-10-21, at 16:14, Jack Vogel  wrote:
> The 40G hardware is absolutely dependent on firmware; if you have a mismatch,
> for instance, it can totally bork things. So I would work with your Intel
> rep and be sure you have the correct version for your specific hardware.

I got these tester cards from Amazon, so I don't have a rep.

I flashed the latest NVM (1.2.5), because previously the FreeBSD driver was 
complaining about the firmware being too old. But I did that before the 
experiments.

If there is anything else I should be doing, I'd appreciate being put in 
contact with someone at Intel who can help.

Thanks,
Lars




sysctl and signed net.bpf.maxbufsize variable

2015-10-21 Thread elof2


Isn't this a bug?

# sysctl net.bpf.maxbufsize=3000000000
net.bpf.maxbufsize: 524288 -> -1294967296

No error message and exit status is 0.
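For reference, a minimal userland sketch (an illustration only, assuming the
OID is backed by a signed 32-bit int) of why a request of 3000000000 comes
back as -1294967296:

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
    long long requested = 3000000000LL;  /* value handed to sysctl(8) */
    int32_t stored = (int32_t)requested; /* what a signed 32-bit OID keeps */

    /* prints: requested 3000000000 -> stored -1294967296 */
    printf("requested %lld -> stored %d\n", requested, stored);
    return (0);
}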


Shouldn't net.bpf.maxbufsize be unsigned?



I would like sysctl to have a crude sanity check and return an error if
you set a positive value but the stored result becomes negative.



...and also have some specific sanity check for values that are way out of
bounds (like setting net.bpf.maxbufsize to a value greater than the amount
of physical RAM).
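A rough sketch of what such a check could look like if the OID used a
validating handler instead of a plain integer sysctl. This is illustrative
only, not the actual net/bpf.c code; the handler name and the upper bound
are made up here.

/*
 * Illustrative sketch only -- reject new values that wrap negative or
 * are absurdly large instead of silently storing them.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <sys/errno.h>

SYSCTL_DECL(_net_bpf);

static int bpf_maxbufsize_val = 524288;        /* current default */
#define BPF_BUFSIZE_LIMIT       (1 << 30)      /* arbitrary illustrative cap */

static int
sysctl_bpf_maxbufsize(SYSCTL_HANDLER_ARGS)
{
    int error, val;

    val = bpf_maxbufsize_val;
    error = sysctl_handle_int(oidp, &val, 0, req);
    if (error != 0 || req->newptr == NULL)
        return (error);
    /* Refuse values that went negative (sign wrap) or are way too big. */
    if (val <= 0 || val > BPF_BUFSIZE_LIMIT)
        return (EINVAL);
    bpf_maxbufsize_val = val;
    return (0);
}

SYSCTL_PROC(_net_bpf, OID_AUTO, maxbufsize, CTLTYPE_INT | CTLFLAG_RW,
    NULL, 0, sysctl_bpf_maxbufsize, "I",
    "Maximum buffer size to allocate per bpf descriptor");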


/Elof


Re: ixl 40G bad performance?

2015-10-21 Thread hiren panchasara
+ Eric from Intel
(Also trimming the CC list as it wouldn't let me send the message
otherwise.)

On 10/21/15 at 02:59P, Eggert, Lars wrote:
> Hi Jack,
> 
> On 2015-10-21, at 16:14, Jack Vogel  wrote:
> > The 40G hardware is absolutely dependent on firmware; if you have a mismatch,
> > for instance, it can totally bork things. So I would work with your Intel
> > rep and be sure you have the correct version for your specific hardware.
> 
> I got these tester cards from Amazon, so I don't have a rep.
> 
> I flashed the latest NVM (1.2.5), because previously the FreeBSD driver was 
> complaining about the firmware being too old. But I did that before the 
> experiments.
> 
> If there is anything else I should be doing, I'd appreciate being put in 
> contact with someone at Intel who can help.

Eric,

Can you think of anything else that could explain this low performance?

Cheers,
Hiren




[Bug 193579] [axge] axge driver issue with tcp checksum offload with pf nat

2015-10-21 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=193579

--- Comment #5 from commit-h...@freebsd.org ---
A commit references this bug:

Author: kp
Date: Wed Oct 21 15:32:21 UTC 2015
New revision: 289703
URL: https://svnweb.freebsd.org/changeset/base/289703

Log:
  MFC r289316:

  pf: Fix TSO issues

  In certain configurations (mostly but not exclusively as a VM on Xen) pf
  produced packets with an invalid TCP checksum.

  The problem was that pf could only handle packets with a full checksum. The
  FreeBSD IP stack produces TCP packets with a pseudo-header checksum (only
  addresses, length and protocol).
  Certain network interfaces expect to see the pseudo-header checksum, so they
  end up producing packets with invalid checksums.

  To fix this stop calculating the full checksum and teach pf to only update
  TCP checksums if TSO is disabled or the change affects the pseudo-header
  checksum.

  PR: 154428, 193579, 198868
  Relnotes:   yes
  Sponsored by:   RootBSD

Changes:
_U  stable/10/
  stable/10/sys/net/pfvar.h
  stable/10/sys/netpfil/pf/pf.c
  stable/10/sys/netpfil/pf/pf_ioctl.c
  stable/10/sys/netpfil/pf/pf_norm.c
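To illustrate what the log means by a pseudo-header checksum, here is a
small self-contained sketch (simplified; the kernel itself works on mbufs
and uses in_cksum/in_pseudo, and byte-order details are glossed over): the
partial sum covers only the addresses, protocol and TCP length, and the NIC
is expected to extend it over the TCP header and payload when TSO or
checksum offload is enabled.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>

/* RFC 1071 one's-complement sum over a buffer (no final inversion). */
static uint16_t
ones_sum(const uint8_t *buf, size_t len)
{
    uint32_t sum = 0;
    size_t i;

    for (i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)buf[i] << 8 | buf[i + 1];
    if (len & 1)
        sum += (uint32_t)buf[len - 1] << 8;
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return ((uint16_t)sum);
}

/*
 * IPv4/TCP pseudo-header: src addr, dst addr, zero, protocol, TCP length.
 * Conceptually this partial sum is what the stack leaves behind for the
 * NIC to finish; a full checksum would also cover the TCP header and
 * payload and then be inverted.
 */
static uint16_t
tcp_pseudo_sum(struct in_addr src, struct in_addr dst, uint16_t tcp_len)
{
    uint8_t ph[12];

    memcpy(ph, &src, sizeof(src));
    memcpy(ph + 4, &dst, sizeof(dst));
    ph[8] = 0;
    ph[9] = IPPROTO_TCP;
    ph[10] = (uint8_t)(tcp_len >> 8);
    ph[11] = (uint8_t)(tcp_len & 0xff);
    return (ones_sum(ph, sizeof(ph)));
}

int
main(void)
{
    struct in_addr src, dst;

    inet_pton(AF_INET, "192.0.2.1", &src);
    inet_pton(AF_INET, "198.51.100.2", &dst);
    printf("pseudo-header sum: 0x%04x\n", tcp_pseudo_sum(src, dst, 1460));
    return (0);
}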



Re: Some MSI are not routed correctly

2015-10-21 Thread John Baldwin
On Wednesday, October 21, 2015 11:29:17 AM Maxim Sobolev wrote:
> Yes, I do. However, please note that for some reason they are not using
> nearly as much CPU time as the other 4.
> 
>11 root  -92- 0K  1104K WAIT3  95.3H  28.96%
> intr{irq267: igb0:que}
>11 root  -92- 0K  1104K WAIT1  95.5H  24.41%
> intr{irq265: igb0:que}
>11 root  -92- 0K  1104K CPU22  95.2H  23.73%
> intr{irq266: igb0:que}
>11 root  -92- 0K  1104K WAIT0  95.2H  23.05%
> intr{irq264: igb0:que}
>11 root  -92- 0K  1104K WAIT6 286:37   1.12%
> intr{irq271: igb1:que}
>11 root  -92- 0K  1104K WAIT7 278:05   1.12%
> intr{irq272: igb1:que}
>11 root  -92- 0K  1104K WAIT5 284:26   1.07%
> intr{irq270: igb1:que}
>11 root  -92- 0K  1104K WAIT4 290:41   0.98%
> intr{irq269: igb1:que}
> 
> CPU 0:   0.0% user,  0.0% nice,  0.9% system, 24.9% interrupt, 74.2% idle
> CPU 1:   0.5% user,  0.0% nice,  0.0% system, 26.3% interrupt, 73.2% idle
> CPU 2:   0.0% user,  0.0% nice,  1.4% system, 25.4% interrupt, 73.2% idle
> CPU 3:   0.0% user,  0.0% nice,  0.5% system, 23.9% interrupt, 75.6% idle
> CPU 4:   0.9% user,  0.0% nice,  2.3% system,  2.3% interrupt, 94.4% idle
> CPU 5:   1.4% user,  0.0% nice,  4.2% system,  4.2% interrupt, 90.1% idle
> CPU 6:   1.4% user,  0.0% nice,  3.8% system,  1.4% interrupt, 93.4% idle
> CPU 7:   2.8% user,  0.0% nice,  0.0% system,  3.8% interrupt, 93.4% idle
> 
> 34263 igb0:que 0
> 32308 igb0:que 1
> 35022 igb0:que 2
> 34593 igb0:que 3
> 14931 igb1:que 0
> 13059 igb1:que 1
> 12971 igb1:que 2
> 13032 igb1:que 3
> 
> So I guess interrupts are routed correctly after all, but for some reason
> the driver takes about 5 times less time per interrupt to process them
> on CPUs 4-7. Weird.

Are the pps rates the same?  It seems like the interrupt rates on igb0
are double those of igb1?

-- 
John Baldwin


Re: Some MSI are not routed correctly

2015-10-21 Thread Maxim Sobolev
Yes, I do. However, please note that for some reason they are not using
nearly as much CPU time as the other 4.

   11 root  -92- 0K  1104K WAIT3  95.3H  28.96%
intr{irq267: igb0:que}
   11 root  -92- 0K  1104K WAIT1  95.5H  24.41%
intr{irq265: igb0:que}
   11 root  -92- 0K  1104K CPU22  95.2H  23.73%
intr{irq266: igb0:que}
   11 root  -92- 0K  1104K WAIT0  95.2H  23.05%
intr{irq264: igb0:que}
   11 root  -92- 0K  1104K WAIT6 286:37   1.12%
intr{irq271: igb1:que}
   11 root  -92- 0K  1104K WAIT7 278:05   1.12%
intr{irq272: igb1:que}
   11 root  -92- 0K  1104K WAIT5 284:26   1.07%
intr{irq270: igb1:que}
   11 root  -92- 0K  1104K WAIT4 290:41   0.98%
intr{irq269: igb1:que}

CPU 0:   0.0% user,  0.0% nice,  0.9% system, 24.9% interrupt, 74.2% idle
CPU 1:   0.5% user,  0.0% nice,  0.0% system, 26.3% interrupt, 73.2% idle
CPU 2:   0.0% user,  0.0% nice,  1.4% system, 25.4% interrupt, 73.2% idle
CPU 3:   0.0% user,  0.0% nice,  0.5% system, 23.9% interrupt, 75.6% idle
CPU 4:   0.9% user,  0.0% nice,  2.3% system,  2.3% interrupt, 94.4% idle
CPU 5:   1.4% user,  0.0% nice,  4.2% system,  4.2% interrupt, 90.1% idle
CPU 6:   1.4% user,  0.0% nice,  3.8% system,  1.4% interrupt, 93.4% idle
CPU 7:   2.8% user,  0.0% nice,  0.0% system,  3.8% interrupt, 93.4% idle

34263 igb0:que 0
32308 igb0:que 1
35022 igb0:que 2
34593 igb0:que 3
14931 igb1:que 0
13059 igb1:que 1
12971 igb1:que 2
13032 igb1:que 3

So I guess interrupts are routed correctly after all, but for some reason
the driver takes about 5 times less time per interrupt to process them
on CPUs 4-7. Weird.

On Wed, Oct 21, 2015 at 10:41 AM, John Baldwin  wrote:

> On Tuesday, October 20, 2015 06:31:47 PM Maxim Sobolev wrote:
> > Here you go:
> >
> > $ sudo procstat -S 11
> >   PIDTID COMM TDNAME   CPU CSID CPU MASK
> >11 100082 intr irq269: igb1:que   41 4
> >11 100084 intr irq270: igb1:que   51 5
> >11 100086 intr irq271: igb1:que   61 6
> >11 100088 intr irq272: igb1:que   71 7
>
> These are clearly what you want, and you can see that the last CPU they
> ran on is the CPU you want as well.  If you run 'top -SHz' do you see
> the threads running on other CPUs?
>
> --
> John Baldwin
>
>


-- 
Maksym Sobolyev
Sippy Software, Inc.
Internet Telephony (VoIP) Experts
Tel (Canada): +1-778-783-0474
Tel (Toll-Free): +1-855-747-7779
Fax: +1-866-857-6942
Web: http://www.sippysoft.com
MSN: sa...@sippysoft.com
Skype: SippySoft


Re: Some MSI are not routed correctly

2015-10-21 Thread Maxim Sobolev
Oh, bingo! Just checked packets count and found the following:

$ sysctl -a | grep dev.igb.1.queue | grep packets
dev.igb.1.queue3.rx_packets: 2997
dev.igb.1.queue3.tx_packets: 21045801676
dev.igb.1.queue2.rx_packets: 3084
dev.igb.1.queue2.tx_packets: 21265692009
dev.igb.1.queue1.rx_packets: 3016
dev.igb.1.queue1.tx_packets: 21496134503
dev.igb.1.queue0.rx_packets: 48868
dev.igb.1.queue0.tx_packets: 21729900371
$ sysctl -a | grep dev.igb.0.queue | grep packets
dev.igb.0.queue3.rx_packets: 40760861870
dev.igb.0.queue3.tx_packets: 21068449957
dev.igb.0.queue2.rx_packets: 40724698310
dev.igb.0.queue2.tx_packets: 21288469372
dev.igb.0.queue1.rx_packets: 40739376158
dev.igb.0.queue1.tx_packets: 21519768656
dev.igb.0.queue0.rx_packets: 40602824141
dev.igb.0.queue0.tx_packets: 21754065014

Apparently all incoming packets are going through igb0, while outbound
packets get distributed across both. This means the upstream switch is not
doing proper load balancing between the two ports. We'll take it to the DC
to fix.
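A small sampling sketch of the same idea (a hypothetical helper, not from
the thread): it polls those per-queue counters via sysctlbyname(3) once a
second and prints receive deltas, which answers the earlier pps question
directly and makes the igb0/igb1 imbalance obvious. It assumes two igb
units with four queues each and 64-bit counters; adjust as needed.

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define NUNITS  2
#define NQUEUES 4

/* Read one dev.igb.<unit>.queue<q>.rx_packets counter (0 if the OID is missing). */
static uint64_t
rx_counter(int unit, int q)
{
    char oid[96];
    uint64_t val = 0;
    size_t len = sizeof(val);

    snprintf(oid, sizeof(oid), "dev.igb.%d.queue%d.rx_packets", unit, q);
    if (sysctlbyname(oid, &val, &len, NULL, 0) != 0)
        return (0);
    return (val);
}

int
main(void)
{
    uint64_t prev[NUNITS][NQUEUES] = { { 0 } };
    uint64_t cur;
    int unit, q, first = 1;

    for (;;) {
        for (unit = 0; unit < NUNITS; unit++) {
            for (q = 0; q < NQUEUES; q++) {
                cur = rx_counter(unit, q);
                if (!first)
                    printf("igb%d:que %d  rx %ju pkts/s\n", unit, q,
                        (uintmax_t)(cur - prev[unit][q]));
                prev[unit][q] = cur;
            }
        }
        if (!first)
            printf("\n");
        first = 0;
        sleep(1);
    }
}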

Thanks, John, for helping to drill down on that!

On Wed, Oct 21, 2015 at 11:31 AM, John Baldwin  wrote:

> On Wednesday, October 21, 2015 11:29:17 AM Maxim Sobolev wrote:
> > Yes, I do. However, please note that for some reason they are not using
> > nearly as much CPU time as the other 4.
> >
> >11 root  -92- 0K  1104K WAIT3  95.3H  28.96%
> > intr{irq267: igb0:que}
> >11 root  -92- 0K  1104K WAIT1  95.5H  24.41%
> > intr{irq265: igb0:que}
> >11 root  -92- 0K  1104K CPU22  95.2H  23.73%
> > intr{irq266: igb0:que}
> >11 root  -92- 0K  1104K WAIT0  95.2H  23.05%
> > intr{irq264: igb0:que}
> >11 root  -92- 0K  1104K WAIT6 286:37   1.12%
> > intr{irq271: igb1:que}
> >11 root  -92- 0K  1104K WAIT7 278:05   1.12%
> > intr{irq272: igb1:que}
> >11 root  -92- 0K  1104K WAIT5 284:26   1.07%
> > intr{irq270: igb1:que}
> >11 root  -92- 0K  1104K WAIT4 290:41   0.98%
> > intr{irq269: igb1:que}
> >
> > CPU 0:   0.0% user,  0.0% nice,  0.9% system, 24.9% interrupt, 74.2% idle
> > CPU 1:   0.5% user,  0.0% nice,  0.0% system, 26.3% interrupt, 73.2% idle
> > CPU 2:   0.0% user,  0.0% nice,  1.4% system, 25.4% interrupt, 73.2% idle
> > CPU 3:   0.0% user,  0.0% nice,  0.5% system, 23.9% interrupt, 75.6% idle
> > CPU 4:   0.9% user,  0.0% nice,  2.3% system,  2.3% interrupt, 94.4% idle
> > CPU 5:   1.4% user,  0.0% nice,  4.2% system,  4.2% interrupt, 90.1% idle
> > CPU 6:   1.4% user,  0.0% nice,  3.8% system,  1.4% interrupt, 93.4% idle
> > CPU 7:   2.8% user,  0.0% nice,  0.0% system,  3.8% interrupt, 93.4% idle
> >
> > 34263 igb0:que 0
> > 32308 igb0:que 1
> > 35022 igb0:que 2
> > 34593 igb0:que 3
> > 14931 igb1:que 0
> > 13059 igb1:que 1
> > 12971 igb1:que 2
> > 13032 igb1:que 3
> >
> > So I guess interrupts are routed correctly after all, but for some reason
> > the driver takes about 5 times less time per interrupt to process them
> > on CPUs 4-7. Weird.
>
> Are the pps rates the same?  It seems like the interrupt rates on igb0
> are double those of igb1?
>
> --
> John Baldwin
>


Re: Some MSI are not routed correctly

2015-10-21 Thread John Baldwin
On Tuesday, October 20, 2015 06:31:47 PM Maxim Sobolev wrote:
> Here you go:
> 
> $ sudo procstat -S 11
>   PIDTID COMM TDNAME   CPU CSID CPU MASK
>11 100082 intr irq269: igb1:que   41 4
>11 100084 intr irq270: igb1:que   51 5
>11 100086 intr irq271: igb1:que   61 6
>11 100088 intr irq272: igb1:que   71 7

These are clearly what you want, and you can see that the last CPU they
ran on is the CPU you want as well.  If you run 'top -SHz' do you see
the threads running on other CPUs?

-- 
John Baldwin


[Bug 203916] ethernet and wlan interfaces both have the same mac-address after upgrade to r289486

2015-10-21 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203916

Mark Linimon  changed:

           What    |Removed                 |Added
   ----------------------------------------------------------------------
           Assignee|freebsd-b...@freebsd.org|freebsd-net@FreeBSD.org
           Keywords|                        |regression



Re: ixl 40G bad performance?

2015-10-21 Thread Eggert, Lars
Hi Bruce,

thanks for the very detailed analysis of the ixl sysctls!

On 2015-10-20, at 16:51, Bruce Evans  wrote:
> 
> Lowering (improving) latency always lowers (unimproves) throughput by
> increasing load.

That, I also understand. But even when I back off the itr values to something 
more reasonable, throughput still remains low.

With all the tweaking I have tried, I have yet to top 3 Gb/s with ixl cards, 
whereas they do ~13 Gb/s on Linux straight out of the box.

Lars




[Bug 203922] The kern.ipc.acceptqueue limit is too low

2015-10-21 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203922

Garrett Cooper,425-314-3911  changed:

           What    |Removed                 |Added
   ----------------------------------------------------------------------
                 CC|                        |n...@freebsd.org
           Keywords|                        |patch
           Assignee|freebsd-b...@freebsd.org|freebsd-net@FreeBSD.org
