Source upgrade to 10.3: Undefined symbol "__set_error_selector"

2016-09-22 Thread Chris Stankevitz

FYI (issue is resolved so I'm just reporting for posterity)...

I have four offline ("air gapped") FreeBSD systems with nearly identical 
hardware.  Two started life as 10.1-RELEASE and the other two started 
life as 10.2-RELEASE.


All are kept up to date by bringing over /usr/src for their associated 
releases (checked out with 'svn co https://svn.freebsd.org/base/releng/10.x') 
and running make buildworld.


All were upgraded to 10.3-p5 and failed 'make installworld' with:

/lib/libthr.so.3: Undefined symbol "__set_error_selector"

All resolved with 'cd /usr/src/lib/libc && make install && cd /usr/src 
&& make installworld'


Chris
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: svn commit: r306207 - releng/11.0/release/doc/en_US.ISO8859-1/relnotes

2016-09-22 Thread Mark Millard
https://svnweb.freebsd.org/base/releng/11.0/release/doc/en_US.ISO8859-1/relnotes/article.xml?revision=306207&view=markup

says. . .


>   The
> 
> WITH_SYSTEM_COMPILER &man.src.conf.5;
> 
> option is enabled by default.

but. . .

> Author: bdrewery
> Date: Wed Sep 21 21:23:09 2016
> New Revision: 306143
> URL: https://svnweb.freebsd.org/changeset/base/306143
> 
> Log:
>   Disable SYSTEM_COMPILER by default.
>   
>   This is a direct commit to releng/11.0.
>   
>   Having it enabled can lead to a situation where building
>   on one system and installing on another will fail due
>   to not finding cc in the OBJDIR.
>   
>   An actual fix will be made on head separately.
>   
>   PR: 212877
>   Relnotes:   yes
>   Sponsored by:   Dell EMC Isilon
>   Approved by:re (gjb)
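
For anyone bitten by the differing defaults, the knob can be pinned
explicitly in /etc/src.conf rather than relying on the release default.
A sketch (not from the commit; either spelling of the knob works per
src.conf(5) conventions):

```
# /etc/src.conf -- pin the behavior instead of relying on the default
WITHOUT_SYSTEM_COMPILER=yes
```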

===
Mark Millard
markmi at dsl-only.net



Re: svn commit: r306207 - releng/11.0/release/doc/en_US.ISO8859-1/relnotes

2016-09-22 Thread Glen Barber
Bah.

I'll add a note to the errata.html page that it was turned off.  It's my
fault.

Glen

On Thu, Sep 22, 2016 at 12:30:40PM -0700, Mark Millard wrote:
> https://svnweb.freebsd.org/base/releng/11.0/release/doc/en_US.ISO8859-1/relnotes/article.xml?revision=306207&view=markup
> 
> says. . .
> 
> 
> >   The
> > 
> > WITH_SYSTEM_COMPILER &man.src.conf.5;
> > 
> > option is enabled by default.
> 
> but. . .
> 
> > Author: bdrewery
> > Date: Wed Sep 21 21:23:09 2016
> > New Revision: 306143
> > URL: https://svnweb.freebsd.org/changeset/base/306143
> > 
> > Log:
> >   Disable SYSTEM_COMPILER by default.
> >   
> >   This is a direct commit to releng/11.0.
> >   
> >   Having it enabled can lead to a situation where building
> >   on one system and installing on another will fail due
> >   to not finding cc in the OBJDIR.
> >   
> >   An actual fix will be made on head separately.
> >   
> >   PR:   212877
> >   Relnotes: yes
> >   Sponsored by: Dell EMC Isilon
> >   Approved by:  re (gjb)
> 
> ===
> Mark Millard
> markmi at dsl-only.net
> 




Re: zvol clone diffs

2016-09-22 Thread Dean E. Weimer

On 2016-09-22 9:38 am, Slawa Olhovchenkov wrote:

On Thu, Sep 22, 2016 at 04:56:53PM +0500, Eugene M. Zheganin wrote:


Hi.

I should mention from the start that this is a question about an
engineering task, not a question about a FreeBSD issue.

I have a set of zvol clones that I redistribute over iSCSI. Several
Windows VMs use these clones as disks via their embedded iSCSI
initiators (each clone represents a disk with an NTFS partition, is
imported as a "foreign" disk and functions just fine). In my opinion,
they should not have any need to do additional writes on these clones
(each VM should only read data, from my point of view). But zfs shows
they do, and sometimes they write a lot of data, so clearly facts and
expectations differ a lot - obviously I didn't take something into
account.


Maybe atime, like on NTFS?

http://serverfault.com/questions/33932/how-do-you-disable-the-last-accessed-attribute-on-ntfs-windows


I would recommend using the Windows diskpart command and setting the 
volume's attribute to read-only; this will force the NTFS volume to be 
read-only and shouldn't allow changes to be saved.
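
Dean's diskpart suggestion can be scripted; a hedged sketch, where the
volume number (2 here) is a placeholder that must be replaced with the
one shown by `list volume` for the iSCSI disk:

```
rem readonly.txt -- run as: diskpart /s readonly.txt
list volume
select volume 2
attributes volume set readonly
```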


--
Thanks,
   Dean E. Weimer
   http://www.dweimer.net/


Re: zvol clone diffs

2016-09-22 Thread Slawa Olhovchenkov
On Thu, Sep 22, 2016 at 04:56:53PM +0500, Eugene M. Zheganin wrote:

> Hi.
> 
> I should mention from the start that this is a question about an
> engineering task, not a question about a FreeBSD issue.
> 
> I have a set of zvol clones that I redistribute over iSCSI. Several
> Windows VMs use these clones as disks via their embedded iSCSI
> initiators (each clone represents a disk with an NTFS partition, is
> imported as a "foreign" disk and functions just fine). In my opinion,
> they should not have any need to do additional writes on these clones
> (each VM should only read data, from my point of view). But zfs shows
> they do, and sometimes they write a lot of data, so clearly facts and
> expectations differ a lot - obviously I didn't take something into
> account.

Maybe atime, like on NTFS?

http://serverfault.com/questions/33932/how-do-you-disable-the-last-accessed-attribute-on-ntfs-windows
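
For reference, the last-access updates that link describes are disabled
from an elevated Windows prompt; a sketch (this is a system-wide NTFS
setting, not per-volume):

```
rem 1 = do not update last-access timestamps on NTFS
fsutil behavior set disablelastaccess 1
```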


Jenkins build is back to stable : FreeBSD_stable_10 #407

2016-09-22 Thread jenkins-admin
See 



Re: zvol clone diffs

2016-09-22 Thread Matthew Seaman
On 22/09/2016 13:56, Eugene M. Zheganin wrote:
> Is there any way to figure out what these writes are? Because I cannot
> come up with any simple enough method.

Given you're using volumes for datasets where ZFS knows nothing about
the contained filesystem structure, about the only way to proceed is via
the Windows side of things.  You'd need to somehow trap where Windows
issues a write and proceed from there.  Ideally you could do something
like snapshot NTFS, wait until Windows has written something and then
compare the snapshot with the live filesystem.  Very cursory Googling
suggests that Microsoft calls this sort of thing a 'shadow copy'.
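
The snapshot-and-compare idea can be tried with the built-in VSS tooling;
a hedged sketch (the `vssadmin create shadow` subcommand is only available
on Windows Server editions, and the drive letter is an assumption):

```
rem take a VSS snapshot, let Windows run a while, then diff against the live volume
vssadmin create shadow /for=C:
vssadmin list shadows
```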

Cheers,

Matthew










Re: zfs/raidz and creation pause/blocking

2016-09-22 Thread Steven Hartland
Almost certainly it's TRIMing the drives; try setting the sysctl 
vfs.zfs.vdev.trim_on_init=0
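
A minimal sketch of the suggested workaround, reusing the pool layout from
Eugene's example below (his device names, not a recommendation):

```sh
# skip the initial TRIM pass for this creation, then restore the default
sysctl vfs.zfs.vdev.trim_on_init=0
zpool create gamestop raidz da5 da7 da8 da9 da10 da11
sysctl vfs.zfs.vdev.trim_on_init=1
```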


On 22/09/2016 12:54, Eugene M. Zheganin wrote:

Hi.

Recently I spent a lot of time setting up various zfs installations, and
I have a question.
Often when creating a raidz on considerably big disks (>~ 1T) I'm seeing
weird stuff: "zpool create" blocks and waits for several minutes. At the
same time the system is fully responsive, and I can see in gstat that the
kernel starts to hammer all the pool candidates sequentially at 100%
busy with iops around zero (in the example below, taken from a live
system, it's doing something with da11):

(zpool create gamestop raidz da5 da7 da8 da9 da10 da11)

dT: 1.064s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0     0.0| da0
    0      0      0      0    0.0      0      0    0.0     0.0| da1
    0      0      0      0    0.0      0      0    0.0     0.0| da2
    0      0      0      0    0.0      0      0    0.0     0.0| da3
    0      0      0      0    0.0      0      0    0.0     0.0| da4
    0      0      0      0    0.0      0      0    0.0     0.0| da5
    0      0      0      0    0.0      0      0    0.0     0.0| da6
    0      0      0      0    0.0      0      0    0.0     0.0| da7
    0      0      0      0    0.0      0      0    0.0     0.0| da8
    0      0      0      0    0.0      0      0    0.0     0.0| da9
    0      0      0      0    0.0      0      0    0.0     0.0| da10
  150      3      0      0    0.0      0      0    0.0   112.6| da11
    0      0      0      0    0.0      0      0    0.0     0.0| da0p1
    0      0      0      0    0.0      0      0    0.0     0.0| da0p2
    0      0      0      0    0.0      0      0    0.0     0.0| da0p3
    0      0      0      0    0.0      0      0    0.0     0.0| da1p1
    0      0      0      0    0.0      0      0    0.0     0.0| da1p2
    0      0      0      0    0.0      0      0    0.0     0.0| da1p3
    0      0      0      0    0.0      0      0    0.0     0.0| da0p4
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/boot0
    0      0      0      0    0.0      0      0    0.0     0.0| gptid/22659641-7ee6-11e6-9b56-0cc47aa41194
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/zroot0
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/esx0
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/boot1
    0      0      0      0    0.0      0      0    0.0     0.0| gptid/23c1fbec-7ee6-11e6-9b56-0cc47aa41194
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/zroot1
    0      0      0      0    0.0      0      0    0.0     0.0| mirror/mirror
    0      0      0      0    0.0      0      0    0.0     0.0| da1p4
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/esx1

The funniest thing is that da5 and da7-da11 are SSDs, capable of at
least 30K iops.
So I wonder what is happening during this and why it takes that long,
because usually pools are created very fast.

Thanks.




zvol clone diffs

2016-09-22 Thread Eugene M. Zheganin
Hi.

I should mention from the start that this is a question about an
engineering task, not a question about a FreeBSD issue.

I have a set of zvol clones that I redistribute over iSCSI. Several
Windows VMs use these clones as disks via their embedded iSCSI
initiators (each clone represents a disk with an NTFS partition, is
imported as a "foreign" disk and functions just fine). In my opinion,
they should not have any need to do additional writes on these clones
(each VM should only read data, from my point of view). But zfs shows
they do, and sometimes they write a lot of data, so clearly facts and
expectations differ a lot - obviously I didn't take something into
account.

Is there any way to figure out what these writes are? Because I cannot
come up with any simple enough method.

Thanks.
Eugene.


zfs/raidz and creation pause/blocking

2016-09-22 Thread Eugene M. Zheganin
Hi.

Recently I spent a lot of time setting up various zfs installations, and
I have a question.
Often when creating a raidz on considerably big disks (>~ 1T) I'm seeing
weird stuff: "zpool create" blocks and waits for several minutes. At the
same time the system is fully responsive, and I can see in gstat that the
kernel starts to hammer all the pool candidates sequentially at 100%
busy with iops around zero (in the example below, taken from a live
system, it's doing something with da11):

(zpool create gamestop raidz da5 da7 da8 da9 da10 da11)

dT: 1.064s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0     0.0| da0
    0      0      0      0    0.0      0      0    0.0     0.0| da1
    0      0      0      0    0.0      0      0    0.0     0.0| da2
    0      0      0      0    0.0      0      0    0.0     0.0| da3
    0      0      0      0    0.0      0      0    0.0     0.0| da4
    0      0      0      0    0.0      0      0    0.0     0.0| da5
    0      0      0      0    0.0      0      0    0.0     0.0| da6
    0      0      0      0    0.0      0      0    0.0     0.0| da7
    0      0      0      0    0.0      0      0    0.0     0.0| da8
    0      0      0      0    0.0      0      0    0.0     0.0| da9
    0      0      0      0    0.0      0      0    0.0     0.0| da10
  150      3      0      0    0.0      0      0    0.0   112.6| da11
    0      0      0      0    0.0      0      0    0.0     0.0| da0p1
    0      0      0      0    0.0      0      0    0.0     0.0| da0p2
    0      0      0      0    0.0      0      0    0.0     0.0| da0p3
    0      0      0      0    0.0      0      0    0.0     0.0| da1p1
    0      0      0      0    0.0      0      0    0.0     0.0| da1p2
    0      0      0      0    0.0      0      0    0.0     0.0| da1p3
    0      0      0      0    0.0      0      0    0.0     0.0| da0p4
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/boot0
    0      0      0      0    0.0      0      0    0.0     0.0| gptid/22659641-7ee6-11e6-9b56-0cc47aa41194
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/zroot0
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/esx0
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/boot1
    0      0      0      0    0.0      0      0    0.0     0.0| gptid/23c1fbec-7ee6-11e6-9b56-0cc47aa41194
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/zroot1
    0      0      0      0    0.0      0      0    0.0     0.0| mirror/mirror
    0      0      0      0    0.0      0      0    0.0     0.0| da1p4
    0      0      0      0    0.0      0      0    0.0     0.0| gpt/esx1

The funniest thing is that da5 and da7-da11 are SSDs, capable of at
least 30K iops.
So I wonder what is happening during this and why it takes that long,
because usually pools are created very fast.

Thanks.


Re: 11.0 stuck on high network load

2016-09-22 Thread Slawa Olhovchenkov
On Thu, Sep 22, 2016 at 12:04:40PM +0200, Julien Charbon wrote:

> >>  These paths can indeed compete for the same INP lock, as both
> >> tcp_tw_2msl_scan() calls always start with the first inp found in
> >> twq_2msl list.  But in both cases, this first inp should be quickly used
> >> and its lock released anyway, thus that could explain your situation if
> >> the TCP stack is doing that all the time, for example:
> >>
> >>  - Let's say that you are completely and constantly running out of tcptw,
> >> and then all connections transitioning to TIME_WAIT state are competing
> >> with the TIME_WAIT timeout scan that tries to free all the expired
> >> tcptw.  If the stack is doing that all the time, it can appear like
> >> "live" locked.
> >>
> >>  This is just a hypothesis and as usual might be a red herring.
> >> Anyway, could you run:
> >>
> >> $ vmstat -z | head -2; vmstat -z | grep -E 'tcp|sock'
> > 
> > ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP
> > 
> > socket: 864, 4192664,   18604,   25348,49276158,   0,   0
> > tcp_inpcb:  464, 4192664,   34226,   18702,49250593,   0,   0
> > tcpcb: 1040, 4192665,   18424,   18953,49250593,   0,   0
> > tcptw:   88,  16425,   15802, 623,14526919,   8,   0
> > tcpreass:40,  32800,  15,2285,  632381,   0,   0
> > 
> > In the normal case tcptw is about 16425/600/900.
> > 
> > And after `sysctl -a | grep tcp` the system got stuck on the serial console
> > and I reset it.
> > 
> >>  Ideally, once when everything is ok, and once when you have the issue
> >> to see the differences (if any).
> >>
> >>  If it appears you are quite low in tcptw, and if you have enough
> >> memory, could you try increase the tcptw limit using sysctl
> > 
> > I think this does not eliminate the stuck state, it just may make it less frequent
> 
>  You are right, it would just be a big hint that the tcp_tw_2msl_scan()
> contention hypothesis is the right one.  As I see you have plenty of
> memory on your server, thus could you try with:
> 
> net.inet.tcp.maxtcptw=4192665
> 
>  And see what happens. Just to validate this hypothesis.

This is a bad way to validate: with maxtcptw=16384 it happens at random,
and one can wait a month for it. After maxtcptw=4192665 I don't know how
long I would need to wait to verify this hypothesis.

More frequently (maybe 3-5 times per day) there are smaller traffic drops
(not to zero for minutes). Maybe these are also caused by contention in
tcp_tw_2msl_scan, but resolve quickly (a stochastic process). By eating
CPU power nginx can't service connections, clients close their
connections, more TIME_WAIT states are needed, and that can trigger
tcp_tw_2msl_scan(reuse=1). After this we can get a live lock.

Maybe after I learn to catch and diagnose this, the validation will be
more accurate.


Re: nginx and FreeBSD11

2016-09-22 Thread Konstantin Belousov
On Thu, Sep 22, 2016 at 12:33:55PM +0300, Slawa Olhovchenkov wrote:
> Do you still need first 100 lines from verbose boot?
No.


Re: 11.0 stuck on high network load

2016-09-22 Thread Julien Charbon

 Hi Slawa,

On 9/22/16 11:53 AM, Slawa Olhovchenkov wrote:
> On Wed, Sep 21, 2016 at 11:25:18PM +0200, Julien Charbon wrote:
>> On 9/21/16 9:51 PM, Slawa Olhovchenkov wrote:
>>> On Wed, Sep 21, 2016 at 09:11:24AM +0200, Julien Charbon wrote:
  You can also use Dtrace and lockstat (especially with the lockstat -s
 option):

 https://wiki.freebsd.org/DTrace/One-Liners#Kernel_Locks
 https://www.freebsd.org/cgi/man.cgi?query=lockstat&manpath=FreeBSD+11.0-RELEASE

  But I am less familiar with Dtrace/lockstat tools.
>>>
>>> I am still using the old kernel and got the lockup again.
>>> I tried lockstat (I saved more output); the interesting part may be this:
>>>
>>> R/W writer spin on writer: 190019 events in 1.070 seconds (177571 events/sec)
>>>
>>> ---
>>> Count   indv cuml rcnt  nsec Lock   Caller
>>> 140839   74%  74% 0.00 24659 tcpinp tcp_tw_2msl_scan+0xc6
>>>
>>>       nsec -- Time Distribution -- count Stack
>>>       4096 |                       913   tcp_twstart+0xa3
>>>       8192 |                       58191 tcp_do_segment+0x201f
>>>      16384 |@@                     29594 tcp_input+0xe1c
>>>      32768 |                       23447 ip_input+0x15f
>>>      65536 |@@@                    16197
>>>     131072 |@                      8674
>>>     262144 |                       3358
>>>     524288 |                       456
>>>    1048576 |                       9
>>> ---
>>> Count   indv cuml rcnt  nsec Lock   Caller
>>> 49180    26% 100% 0.00 15929 tcpinp tcp_tw_2msl_scan+0xc6
>>>
>>>       nsec -- Time Distribution -- count Stack
>>>       4096 |                       157   pfslowtimo+0x54
>>>       8192 |@@@                    24796 softclock_call_cc+0x179
>>>      16384 |@@                     11223 softclock+0x44
>>>      32768 |                       7426  intr_event_execute_handlers+0x95
>>>      65536 |@@                     3918
>>>     131072 |                       1363
>>>     262144 |                       278
>>>     524288 |                       19
>>> ---
>>
>>  This is interesting, it seems that you have two call paths competing
>> for INP locks here:
>>
>>  - pfslowtimo()/tcp_tw_2msl_scan(reuse=0) and
>>
>>  - tcp_input()/tcp_twstart()/tcp_tw_2msl_scan(reuse=1)
> 
> I think the same.
> 
>>  These paths can indeed compete for the same INP lock, as both
>> tcp_tw_2msl_scan() calls always start with the first inp found in
>> twq_2msl list.  But in both cases, this first inp should be quickly used
>> and its lock released anyway, thus that could explain your situation if
>> the TCP stack is doing that all the time, for example:
>>
>>  - Let's say that you are completely and constantly running out of tcptw,
>> and then all connections transitioning to TIME_WAIT state are competing
>> with the TIME_WAIT timeout scan that tries to free all the expired
>> tcptw.  If the stack is doing that all the time, it can appear like
>> "live" locked.
>>
>>  This is just a hypothesis and as usual might be a red herring.
>> Anyway, could you run:
>>
>> $ vmstat -z | head -2; vmstat -z | grep -E 'tcp|sock'
> 
> ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP
> 
> socket: 864, 4192664,   18604,   25348,49276158,   0,   0
> tcp_inpcb:  464, 4192664,   34226,   18702,49250593,   0,   0
> tcpcb: 1040, 4192665,   18424,   18953,49250593,   0,   0
> tcptw:   88,  16425,   15802, 623,14526919,   8,   0
> tcpreass:40,  32800,  15,2285,  632381,   0,   0
> 
> In the normal case tcptw is about 16425/600/900.
> 
> And after `sysctl -a | grep tcp` the system got stuck on the serial console
> and I reset it.
> 
>>  Ideally, once when everything is ok, and once when you have the issue
>> to see the differences (if any).
>>
>>  If it appears you are quite low in tcptw, and if you have enough
>> memory, could you try increase the tcptw limit using sysctl
> 
> I think this does not eliminate the stuck state, it just may make it less frequent

 You are right, it would just be a big hint that the tcp_tw_2msl_scan()
contention hypothesis is the right one.  As I see you have plenty of
memory on your server, thus could you try with:

net.inet.tcp.maxtcptw=4192665

 And see what happens. Just to validate this hypothesis.

Re: 11.0 stuck on high network load

2016-09-22 Thread Slawa Olhovchenkov
On Wed, Sep 21, 2016 at 11:25:18PM +0200, Julien Charbon wrote:

> 
>  Hi Slawa,
> 
> On 9/21/16 9:51 PM, Slawa Olhovchenkov wrote:
> > On Wed, Sep 21, 2016 at 09:11:24AM +0200, Julien Charbon wrote:
> >>  You can also use Dtrace and lockstat (especially with the lockstat -s
> >> option):
> >>
> >> https://wiki.freebsd.org/DTrace/One-Liners#Kernel_Locks
> >> https://www.freebsd.org/cgi/man.cgi?query=lockstat&manpath=FreeBSD+11.0-RELEASE
> >>
> >>  But I am less familiar with Dtrace/lockstat tools.
> > 
> > I am still using the old kernel and got the lockup again.
> > I tried lockstat (I saved more output); the interesting part may be this:
> > 
> > R/W writer spin on writer: 190019 events in 1.070 seconds (177571 events/sec)
> > 
> > ---
> > Count   indv cuml rcnt  nsec Lock   Caller
> > 140839   74%  74% 0.00 24659 tcpinp tcp_tw_2msl_scan+0xc6
> > 
> >       nsec -- Time Distribution -- count Stack
> >       4096 |                       913   tcp_twstart+0xa3
> >       8192 |                       58191 tcp_do_segment+0x201f
> >      16384 |@@                     29594 tcp_input+0xe1c
> >      32768 |                       23447 ip_input+0x15f
> >      65536 |@@@                    16197
> >     131072 |@                      8674
> >     262144 |                       3358
> >     524288 |                       456
> >    1048576 |                       9
> > ---
> > Count   indv cuml rcnt  nsec Lock   Caller
> > 49180    26% 100% 0.00 15929 tcpinp tcp_tw_2msl_scan+0xc6
> > 
> >       nsec -- Time Distribution -- count Stack
> >       4096 |                       157   pfslowtimo+0x54
> >       8192 |@@@                    24796 softclock_call_cc+0x179
> >      16384 |@@                     11223 softclock+0x44
> >      32768 |                       7426  intr_event_execute_handlers+0x95
> >      65536 |@@                     3918
> >     131072 |                       1363
> >     262144 |                       278
> >     524288 |                       19
> > ---
> 
>  This is interesting, it seems that you have two call paths competing
> for INP locks here:
> 
>  - pfslowtimo()/tcp_tw_2msl_scan(reuse=0) and
> 
>  - tcp_input()/tcp_twstart()/tcp_tw_2msl_scan(reuse=1)

I think the same.

>  These paths can indeed compete for the same INP lock, as both
> tcp_tw_2msl_scan() calls always start with the first inp found in
> twq_2msl list.  But in both cases, this first inp should be quickly used
> and its lock released anyway, thus that could explain your situation if
> the TCP stack is doing that all the time, for example:
> 
>  - Let's say that you are completely and constantly running out of tcptw,
> and then all connections transitioning to TIME_WAIT state are competing
> with the TIME_WAIT timeout scan that tries to free all the expired
> tcptw.  If the stack is doing that all the time, it can appear like
> "live" locked.
> 
>  This is just a hypothesis and as usual might be a red herring.
> Anyway, could you run:
> 
> $ vmstat -z | head -2; vmstat -z | grep -E 'tcp|sock'

ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP

socket: 864, 4192664,   18604,   25348,49276158,   0,   0
tcp_inpcb:  464, 4192664,   34226,   18702,49250593,   0,   0
tcpcb: 1040, 4192665,   18424,   18953,49250593,   0,   0
tcptw:   88,  16425,   15802, 623,14526919,   8,   0
tcpreass:40,  32800,  15,2285,  632381,   0,   0

In the normal case tcptw is about 16425/600/900.

And after `sysctl -a | grep tcp` the system got stuck on the serial console
and I reset it.

>  Ideally, once when everything is ok, and once when you have the issue
> to see the differences (if any).
> 
>  If it appears you are quite low in tcptw, and if you have enough
> memory, could you try increase the tcptw limit using sysctl

I think this does not eliminate the stuck state, it just may make it less frequent

> net.inet.tcp.maxtcptw?  And actually see if it improve (or not) your
> performance.

I have already played with net.inet.tcp.maxtcptw and it does not affect performance.
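
When trying to catch the issue, the FAIL column of the tcptw zone is the
number to watch (it was 8 in the output above); a hedged one-liner to
sample it while reproducing:

```sh
# print the tcptw zone line every 5 seconds; a growing FAIL count means
# the stack could not allocate a tcptw and had to recycle an old one
while :; do vmstat -z | grep '^tcptw'; sleep 5; done
```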

Re: 11.0 stuck on high network load

2016-09-22 Thread Slawa Olhovchenkov
On Thu, Sep 22, 2016 at 11:28:38AM +0200, Julien Charbon wrote:

> >>> What is the purpose of not skipping locked tcptw in this loop?
> >>
> >>  If I understand your question correctly:  According to your pmcstat
> >> result, tcp_tw_2msl_scan() currently struggles with a write lock
> >> (__rw_wlock_hard) and the only write lock used tcp_tw_2msl_scan() is
> >> INP_WLOCK.  No sign of contention on TW_RLOCK(V_tw_lock) currently.
> > 
> > As I see in the code, tcp_tw_2msl_scan takes the first node from V_twq_2msl
> > and must get the RW lock on the inp, with no alternative. Can
> > tcp_tw_2msl_scan skip the current node and go to the next node in the
> > V_twq_2msl list if the current node is locked for some reason?
> 
>  Interesting question indeed:  It is not optimal that all simultaneous
> calls to tcp_tw_2msl_scan() compete for the same oldest tcptw.  The next
> tcptws in the list are certainly old enough also.
> 
>  Let me see if I can make a simple change that makes kernel threads
> calling tcp_tw_2msl_scan() at same time to work on a different old
> enough tcptws.  So far, I have found only solutions that are quite complex to implement.

A simple solution is for each thread to step over ncpu elements at a time,
starting at an offset equal to the current CPU number, if I understand you
correctly.
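
Slawa's skipping idea can be modeled in user space; a toy sketch with
hypothetical names (the real kernel code would use INP_TRY_WLOCK on each
inpcb instead of blocking on the head of twq_2msl):

```python
import threading

class TcptwEntry:
    """Stand-in for a tcptw on the 2MSL list; each carries its own inp lock."""
    def __init__(self, ident):
        self.ident = ident
        self.lock = threading.Lock()
        self.freed = False

def tw_scan_with_skip(entries, freed_log):
    """Free every entry whose lock is immediately available; skip contended
    ones instead of spinning, so concurrent scans don't pile up on the head."""
    for e in entries:
        if e.lock.acquire(blocking=False):  # analogous to INP_TRY_WLOCK
            try:
                if not e.freed:
                    e.freed = True
                    freed_log.append(e.ident)
            finally:
                e.lock.release()
        # else: another scanner holds it -- move on to the next entry

entries = [TcptwEntry(i) for i in range(5)]
entries[0].lock.acquire()       # simulate another CPU working on the head
freed = []
tw_scan_with_skip(entries, freed)
entries[0].lock.release()
print(freed)                    # -> [1, 2, 3, 4]: the held head was skipped
```

The per-CPU stride Slawa describes would amount to replacing the plain
`for e in entries` with `for e in entries[cpu::ncpu]`.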



Re: nginx and FreeBSD11

2016-09-22 Thread Slawa Olhovchenkov
On Thu, Sep 22, 2016 at 11:53:20AM +0300, Konstantin Belousov wrote:

> On Thu, Sep 22, 2016 at 11:34:24AM +0300, Slawa Olhovchenkov wrote:
> > On Thu, Sep 22, 2016 at 11:27:40AM +0300, Konstantin Belousov wrote:
> > 
> > > On Thu, Sep 22, 2016 at 11:25:27AM +0300, Slawa Olhovchenkov wrote:
> > > > On Thu, Sep 22, 2016 at 10:59:33AM +0300, Konstantin Belousov wrote:
> > > > > Below is, I believe, the committable fix, of course supposing that
> > > > > the patch above worked. If you want to retest it on stable/11, ignore
> > > > > efirt.c chunks.
> > > > 
> > > > and remove patch w/ spinlock?
> > > Yes.
> > 
> Which do you prefer now -- that I test the spinlock patch or this patch?
> For success, in any case, I need to wait 2-3 days.
> 
> If you have already run the previous (spinlock) version for 1 day, then finish
> with it. I am confident that the spinlock version results are indicative for
> the refined patch as well.
> 
> If you did not apply the spinlock variant at all, there is no reason to
> spend effort on it; use the patch I sent today.

No, I did not apply the spinlock variant at all.
OK, I will try this patch.
Do you still need the first 100 lines from the verbose boot?


Re: 11.0 stuck on high network load

2016-09-22 Thread Julien Charbon

 Hi Slawa,

On 9/21/16 10:31 AM, Slawa Olhovchenkov wrote:
> On Wed, Sep 21, 2016 at 09:11:24AM +0200, Julien Charbon wrote:
>> On 9/20/16 10:26 PM, Slawa Olhovchenkov wrote:
>>> On Tue, Sep 20, 2016 at 10:00:25PM +0200, Julien Charbon wrote:
 On 9/19/16 10:43 PM, Slawa Olhovchenkov wrote:
> On Mon, Sep 19, 2016 at 10:32:13PM +0200, Julien Charbon wrote:
>>
>>> @ CPU_CLK_UNHALTED_CORE [4653445 samples]
>>>
>>> 51.86%  [2413083]  lock_delay @ /boot/kernel.VSTREAM/kernel
>>>  100.0%  [2413083]   __rw_wlock_hard
>>>   100.0%  [2413083]tcp_tw_2msl_scan
>>>99.99%  [2412958] pfslowtimo
>>> 100.0%  [2412958]  softclock_call_cc
>>>  100.0%  [2412958]   softclock
>>>   100.0%  [2412958]intr_event_execute_handlers
>>>100.0%  [2412958] ithread_loop
>>> 100.0%  [2412958]  fork_exit
>>>00.01%  [125] tcp_twstart
>>> 100.0%  [125]  tcp_do_segment
>>>  100.0%  [125]   tcp_input
>>>   100.0%  [125]ip_input
>>>100.0%  [125] swi_net
>>> 100.0%  [125]  intr_event_execute_handlers
>>>  100.0%  [125]   ithread_loop
>>>   100.0%  [125]fork_exit
>>
>>  The only write lock tcp_tw_2msl_scan() tries to get is a
>> INP_WLOCK(inp).  Thus here, tcp_tw_2msl_scan() seems to be stuck
>> spinning on INP_WLOCK (or pfslowtimo() is going crazy and calls
>> tcp_tw_2msl_scan() at high rate but this will be quite unexpected).
>>
>>  Thus my hypothesis is that something is holding the INP_WLOCK and not
>> releasing it, and tcp_tw_2msl_scan() is spinning on it.
>>
>>  If you can, could you compile the kernel with below options:
>>
>> options     DDB                 # Support DDB.
>> options     DEADLKRES           # Enable the deadlock resolver
>> options     INVARIANTS          # Enable calls of extra sanity checking
>> options     INVARIANT_SUPPORT   # Extra sanity checks of internal
>>                                 # structures, required by INVARIANTS
>> options     WITNESS             # Enable checks to detect deadlocks and cycles
>> options     WITNESS_SKIPSPIN    # Don't run witness on spinlocks for speed
>
> Currently this host runs with 100% CPU load (on all cores), i.e.
> enabling WITNESS will significantly drop performance.
> Can I use only some subset of the options?
>
> Also, I may have some trouble entering DDB in this case.
> Maybe kgdb will succeed (not tried yet)?

  If these kernel options will certainly slow down your kernel, they also
 might find the root cause of your issue before reaching the point where
 you have 100% cpu load on all cores (thanks to INVARIANTS).  I would
 suggest:
>>>
>>> Hmmm, maybe I was not clear.
>>> This host runs at peak hours with 100% CPU load as normal operation;
>>> this is for serving 2x10G. This CPU load is not the result of a lock
>>> issue; that is not our case. And this is why I fear enabling
>>> WITNESS -- I fear the performance drop.
>>>
>>> This lock issue happens irregularly and may be caused by another issue
>>> (nginx crashing). In this case about 1/3 of the cores have 100% cpu load,
>>> perhaps from this lock -- I can trace only from one core and need
>>> more than an hour for this (there may be a different trace on other
>>> cores, I can't guarantee anything).
>>
>>  I see, especially if you are running in production WITNESS might indeed
>> be not practical for you.  In this case, I would suggest before doing
>> WITNESS and still get more information to:
>>
>>  #0: Do a lock profiling:
>>
>> https://www.freebsd.org/cgi/man.cgi?query=LOCK_PROFILING
>>
>> options LOCK_PROFILING
>>
>>  Example of usage:
>>
>> # Run
>> $ sudo sysctl debug.lock.prof.enable=1
>> $ sleep 10
>> $ sudo sysctl debug.lock.prof.enable=0
>>
>> # Get results
>> $ sysctl debug.lock.prof.stats | head -2; sysctl debug.lock.prof.stats |
>> sort -n -k 4 -r
> 
> OK, but in the case of a leaked lock (why is the inp locked too long for
> tcp_tw_2msl_scan?) I can't see the cause of this lock by running these
> commands after the stuck has happened, can I?
> 
>>> What is the purpose of not skipping locked tcptw in this loop?
>>
>>  If I understand your question correctly:  According to your pmcstat
>> result, tcp_tw_2msl_scan() currently struggles with a write lock
>> (__rw_wlock_hard) and the only write lock used tcp_tw_2msl_scan() is
>> INP_WLOCK.  No sign of contention on TW_RLOCK(V_tw_lock) currently.
> 
> As I see in the code, tcp_tw_2msl_scan takes the first node from V_twq_2msl
> and must get the RW lock on the inp, with no alternative. Can
> tcp_tw_2msl_scan skip the current node and go to the next node in the
> V_twq_2msl list if the current node is locked for some reason?

 Interesting question indeed:  It is not optimal that all simultaneous
calls to tcp_tw_2msl_scan() compete for the same oldest tcptw.

Re: nginx and FreeBSD11

2016-09-22 Thread Konstantin Belousov
On Thu, Sep 22, 2016 at 11:34:24AM +0300, Slawa Olhovchenkov wrote:
> On Thu, Sep 22, 2016 at 11:27:40AM +0300, Konstantin Belousov wrote:
> 
> > On Thu, Sep 22, 2016 at 11:25:27AM +0300, Slawa Olhovchenkov wrote:
> > > On Thu, Sep 22, 2016 at 10:59:33AM +0300, Konstantin Belousov wrote:
> > > > Below is, I believe, the committable fix, of course supposing that
> > > > the patch above worked. If you want to retest it on stable/11, ignore
> > > > efirt.c chunks.
> > > 
> > > and remove patch w/ spinlock?
> > Yes.
> 
> Which do you prefer now -- that I test the spinlock patch or this patch?
> For success, in any case, I need to wait 2-3 days.

If you have already run the previous (spinlock) version for 1 day, then finish
with it. I am confident that the spinlock version results are indicative for
the refined patch as well.

If you did not apply the spinlock variant at all, there is no reason to
spend effort on it; use the patch I sent today.
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: nginx and FreeBSD11

2016-09-22 Thread Slawa Olhovchenkov
On Thu, Sep 22, 2016 at 11:27:40AM +0300, Konstantin Belousov wrote:

> On Thu, Sep 22, 2016 at 11:25:27AM +0300, Slawa Olhovchenkov wrote:
> > On Thu, Sep 22, 2016 at 10:59:33AM +0300, Konstantin Belousov wrote:
> > > Below is, I believe, the committable fix, of course supposing that
> > > the patch above worked. If you want to retest it on stable/11, ignore
> > > efirt.c chunks.
> > 
> > and should I remove the spinlock patch?
> Yes.

Which do you prefer now -- that I test the spinlock patch, or this patch?
Either way, a successful result needs a 2-3 day wait.


Re: nginx and FreeBSD11

2016-09-22 Thread Konstantin Belousov
On Thu, Sep 22, 2016 at 11:25:27AM +0300, Slawa Olhovchenkov wrote:
> On Thu, Sep 22, 2016 at 10:59:33AM +0300, Konstantin Belousov wrote:
> > Below is, I believe, the committable fix, of course supposing that
> > the patch above worked. If you want to retest it on stable/11, ignore
> > efirt.c chunks.
> 
> and should I remove the spinlock patch?
Yes.


Re: nginx and FreeBSD11

2016-09-22 Thread Slawa Olhovchenkov
On Thu, Sep 22, 2016 at 10:59:33AM +0300, Konstantin Belousov wrote:

> On Wed, Sep 21, 2016 at 12:15:17AM +0300, Konstantin Belousov wrote:
> > > > diff --git a/sys/vm/vm_map.c b/sys/vm/vm_map.c
> > > > index a23468e..f754652 100644
> > > > --- a/sys/vm/vm_map.c
> > > > +++ b/sys/vm/vm_map.c
> > > > @@ -481,6 +481,7 @@ vmspace_switch_aio(struct vmspace *newvm)
> > > > if (oldvm == newvm)
> > > > return;
> > > >  
> > > > +   spinlock_enter();
> > > > /*
> > > >  * Point to the new address space and refer to it.
> > > >  */
> > > > @@ -489,6 +490,7 @@ vmspace_switch_aio(struct vmspace *newvm)
> > > >  
> > > > /* Activate the new mapping. */
> > > > pmap_activate(curthread);
> > > > +   spinlock_exit();
> > > >  
> > > > /* Remove the daemon's reference to the old address space. */
> > > > KASSERT(oldvm->vm_refcnt > 1,
> Did you test the patch?

I have now installed it.
A successful test needs 2-3 days.
If the test fails, the result may come quickly.

> Below is, I believe, the committable fix, of course supposing that
> the patch above worked. If you want to retest it on stable/11, ignore
> efirt.c chunks.

and should I remove the spinlock patch?

> diff --git a/sys/amd64/amd64/efirt.c b/sys/amd64/amd64/efirt.c
> index f1d67f7..c883af8 100644
> --- a/sys/amd64/amd64/efirt.c
> +++ b/sys/amd64/amd64/efirt.c
> @@ -53,6 +53,7 @@ __FBSDID("$FreeBSD$");
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -301,6 +302,17 @@ efi_enter(void)
>   PMAP_UNLOCK(curpmap);
>   return (error);
>   }
> +
> + /*
> +  * IPI TLB shootdown handler invltlb_pcid_handler() reloads
> +  * %cr3 from the curpmap->pm_cr3, which would disable runtime
> +  * segments mappings.  Block the handler's action by setting
> +  * curpmap to impossible value.  See also comment in
> +  * pmap.c:pmap_activate_sw().
> +  */
> + if (pmap_pcid_enabled && !invpcid_works)
> + PCPU_SET(curpmap, NULL);
> +
>   load_cr3(VM_PAGE_TO_PHYS(efi_pml4_page) | (pmap_pcid_enabled ?
>   curpmap->pm_pcids[PCPU_GET(cpuid)].pm_pcid : 0));
>   /*
> @@ -317,7 +329,9 @@ efi_leave(void)
>  {
>   pmap_t curpmap;
>  
> - curpmap = PCPU_GET(curpmap);
> + curpmap = &curproc->p_vmspace->vm_pmap;
> + if (pmap_pcid_enabled && !invpcid_works)
> + PCPU_SET(curpmap, curpmap);
>   load_cr3(curpmap->pm_cr3 | (pmap_pcid_enabled ?
>   curpmap->pm_pcids[PCPU_GET(cpuid)].pm_pcid : 0));
>   if (!pmap_pcid_enabled)
> diff --git a/sys/amd64/amd64/pmap.c b/sys/amd64/amd64/pmap.c
> index 63042e4..59e1b67 100644
> --- a/sys/amd64/amd64/pmap.c
> +++ b/sys/amd64/amd64/pmap.c
> @@ -6842,6 +6842,7 @@ pmap_activate_sw(struct thread *td)
>  {
>   pmap_t oldpmap, pmap;
>   uint64_t cached, cr3;
> + register_t rflags;
>   u_int cpuid;
>  
>   oldpmap = PCPU_GET(curpmap);
> @@ -6865,16 +6866,43 @@ pmap_activate_sw(struct thread *td)
>   pmap == kernel_pmap,
>   ("non-kernel pmap thread %p pmap %p cpu %d pcid %#x",
>   td, pmap, cpuid, pmap->pm_pcids[cpuid].pm_pcid));
> +
> + /*
> +  * If the INVPCID instruction is not available,
> +  * invltlb_pcid_handler() is used for handle
> +  * invalidate_all IPI, which checks for curpmap ==
> +  * smp_tlb_pmap.  Below operations sequence has a
> +  * window where %CR3 is loaded with the new pmap's
> +  * PML4 address, but curpmap value is not yet updated.
> +  * This causes invltlb IPI handler, called between the
> +  * updates, to execute as NOP, which leaves stale TLB
> +  * entries.
> +  *
> +  * Note that the most typical use of
> +  * pmap_activate_sw(), from the context switch, is
> +  * immune to this race, because interrupts are
> +  * disabled (while the thread lock is owned), and IPI
> +  * happends after curpmap is updated.  Protect other
> +  * callers in a similar way, by disabling interrupts
> +  * around the %cr3 register reload and curpmap
> +  * assignment.
> +  */
> + if (!invpcid_works)
> + rflags = intr_disable();
> +
>   if (!cached || (cr3 & ~CR3_PCID_MASK) != pmap->pm_cr3) {
>   load_cr3(pmap->pm_cr3 | pmap->pm_pcids[cpuid].pm_pcid |
>   cached);
>   if (cached)
>   PCPU_INC(pm_save_cnt);
>   }
> + PCPU_SET(curpmap, pmap);
> + if (!invpcid_works)
> + intr_restore(rflags);
>   } else if (cr3 != pmap->pm_cr3) {
>   load_cr3(pmap->pm_cr3);
> + PCPU_SET(curpmap, pmap);
>   }
> -

Re: nginx and FreeBSD11

2016-09-22 Thread Konstantin Belousov
On Wed, Sep 21, 2016 at 12:15:17AM +0300, Konstantin Belousov wrote:
> > > diff --git a/sys/vm/vm_map.c b/sys/vm/vm_map.c
> > > index a23468e..f754652 100644
> > > --- a/sys/vm/vm_map.c
> > > +++ b/sys/vm/vm_map.c
> > > @@ -481,6 +481,7 @@ vmspace_switch_aio(struct vmspace *newvm)
> > >   if (oldvm == newvm)
> > >   return;
> > >  
> > > + spinlock_enter();
> > >   /*
> > >* Point to the new address space and refer to it.
> > >*/
> > > @@ -489,6 +490,7 @@ vmspace_switch_aio(struct vmspace *newvm)
> > >  
> > >   /* Activate the new mapping. */
> > >   pmap_activate(curthread);
> > > + spinlock_exit();
> > >  
> > >   /* Remove the daemon's reference to the old address space. */
> > >   KASSERT(oldvm->vm_refcnt > 1,
Did you test the patch?

Below is, I believe, the committable fix, of course supposing that
the patch above worked. If you want to retest it on stable/11, ignore
efirt.c chunks.

diff --git a/sys/amd64/amd64/efirt.c b/sys/amd64/amd64/efirt.c
index f1d67f7..c883af8 100644
--- a/sys/amd64/amd64/efirt.c
+++ b/sys/amd64/amd64/efirt.c
@@ -53,6 +53,7 @@ __FBSDID("$FreeBSD$");
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -301,6 +302,17 @@ efi_enter(void)
PMAP_UNLOCK(curpmap);
return (error);
}
+
+   /*
+* IPI TLB shootdown handler invltlb_pcid_handler() reloads
+* %cr3 from the curpmap->pm_cr3, which would disable runtime
+* segments mappings.  Block the handler's action by setting
+* curpmap to impossible value.  See also comment in
+* pmap.c:pmap_activate_sw().
+*/
+   if (pmap_pcid_enabled && !invpcid_works)
+   PCPU_SET(curpmap, NULL);
+
load_cr3(VM_PAGE_TO_PHYS(efi_pml4_page) | (pmap_pcid_enabled ?
curpmap->pm_pcids[PCPU_GET(cpuid)].pm_pcid : 0));
/*
@@ -317,7 +329,9 @@ efi_leave(void)
 {
pmap_t curpmap;
 
-   curpmap = PCPU_GET(curpmap);
+   curpmap = &curproc->p_vmspace->vm_pmap;
+   if (pmap_pcid_enabled && !invpcid_works)
+   PCPU_SET(curpmap, curpmap);
load_cr3(curpmap->pm_cr3 | (pmap_pcid_enabled ?
curpmap->pm_pcids[PCPU_GET(cpuid)].pm_pcid : 0));
if (!pmap_pcid_enabled)
diff --git a/sys/amd64/amd64/pmap.c b/sys/amd64/amd64/pmap.c
index 63042e4..59e1b67 100644
--- a/sys/amd64/amd64/pmap.c
+++ b/sys/amd64/amd64/pmap.c
@@ -6842,6 +6842,7 @@ pmap_activate_sw(struct thread *td)
 {
pmap_t oldpmap, pmap;
uint64_t cached, cr3;
+   register_t rflags;
u_int cpuid;
 
oldpmap = PCPU_GET(curpmap);
@@ -6865,16 +6866,43 @@ pmap_activate_sw(struct thread *td)
pmap == kernel_pmap,
("non-kernel pmap thread %p pmap %p cpu %d pcid %#x",
td, pmap, cpuid, pmap->pm_pcids[cpuid].pm_pcid));
+
+   /*
+* If the INVPCID instruction is not available,
+* invltlb_pcid_handler() is used for handle
+* invalidate_all IPI, which checks for curpmap ==
+* smp_tlb_pmap.  Below operations sequence has a
+* window where %CR3 is loaded with the new pmap's
+* PML4 address, but curpmap value is not yet updated.
+* This causes invltlb IPI handler, called between the
+* updates, to execute as NOP, which leaves stale TLB
+* entries.
+*
+* Note that the most typical use of
+* pmap_activate_sw(), from the context switch, is
+* immune to this race, because interrupts are
+* disabled (while the thread lock is owned), and IPI
+* happends after curpmap is updated.  Protect other
+* callers in a similar way, by disabling interrupts
+* around the %cr3 register reload and curpmap
+* assignment.
+*/
+   if (!invpcid_works)
+   rflags = intr_disable();
+
if (!cached || (cr3 & ~CR3_PCID_MASK) != pmap->pm_cr3) {
load_cr3(pmap->pm_cr3 | pmap->pm_pcids[cpuid].pm_pcid |
cached);
if (cached)
PCPU_INC(pm_save_cnt);
}
+   PCPU_SET(curpmap, pmap);
+   if (!invpcid_works)
+   intr_restore(rflags);
} else if (cr3 != pmap->pm_cr3) {
load_cr3(pmap->pm_cr3);
+   PCPU_SET(curpmap, pmap);
}
-   PCPU_SET(curpmap, pmap);
 #ifdef SMP
CPU_CLR_ATOMIC(cpuid, &oldpmap->pm_active);
 #else


Jenkins build became unstable: FreeBSD_stable_10 #406

2016-09-22 Thread jenkins-admin
See 
