TB --- 2012-10-03 01:10:00 - tinderbox 2.9 running on freebsd-current.sentex.ca
TB --- 2012-10-03 01:10:00 - FreeBSD freebsd-current.sentex.ca 8.3-PRERELEASE
FreeBSD 8.3-PRERELEASE #0: Mon Mar 26 13:54:12 EDT 2012
d...@freebsd-current.sentex.ca:/usr/obj/usr/src/sys/GENERIC amd64
TB ---
TB --- 2012-10-03 10:30:00 - tinderbox 2.9 running on freebsd-current.sentex.ca
TB --- 2012-10-03 10:30:00 - FreeBSD freebsd-current.sentex.ca 8.3-PRERELEASE
FreeBSD 8.3-PRERELEASE #0: Mon Mar 26 13:54:12 EDT 2012
d...@freebsd-current.sentex.ca:/usr/obj/usr/src/sys/GENERIC amd64
TB ---
FreeBSD Tinderbox wrote:
cc -c -O -pipe -std=c99 -g -Wall -Wredundant-decls -Wnested-externs
-Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Winline
-Wcast-qual -Wundef -Wno-pointer-sign -fformat-extensions
-Wmissing-include-dirs -fdiagnostics-show-option -nostdinc -I.
On 2012-09-19 22:22, Lev Serebryakov wrote:
Hello, Freebsd-current.
I've upgraded my FreeBSD-CURRENT virtual machine, which I use to
build a router's NanoBSD image, to this morning's (MSK time, GMT+4)
revision. Unfortunately, I cannot provide the exact revision, as the
sources are in this
Hi everyone.
Actually, 65k sockets is incredibly easy to reach.
I manage some servers for a very large website. It currently has several
HTTP servers clustered to handle daily traffic, and this is only the dynamic
content; static content has its own servers, and the databases have their own too.
We recently
On Wed, Oct 3, 2012 at 11:45 AM, free...@chrysalisnet.org wrote:
Hi everyone.
Actually, 65k sockets is incredibly easy to reach.
I manage some servers for a very large website. It currently has several
HTTP servers clustered to handle daily traffic, and this is only the dynamic
content; static
Hi,
somaxconn is the connection queue depth. If it's sitting at a couple
hundred thousand then something else is going crazily wrong.
I understand your frustration, but there are a lot of instances where
the application just isn't doing things right and the OS tries to
hide it as much as possible.
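Adrian's point about the OS hiding application mistakes is visible in
listen(2) itself: on FreeBSD a backlog larger than kern.ipc.somaxconn is
silently clamped to that limit rather than rejected. A minimal sketch to
illustrate (not from the thread; the port and the oversized backlog are
arbitrary):

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <err.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct sockaddr_in sin;
	int s;

	if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1)
		err(1, "socket");
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(8080);	/* arbitrary example port */
	sin.sin_addr.s_addr = htonl(INADDR_ANY);
	if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
		err(1, "bind");
	/*
	 * Ask for a million-entry accept queue; the kernel does not
	 * fail here, it silently clamps the backlog to somaxconn.
	 */
	if (listen(s, 1000000) == -1)
		err(1, "listen");
	close(s);
	return (0);
}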
Hi,
On 10/03/12 11:45, free...@chrysalisnet.org wrote:
Hi everyone.
Actually, 65k sockets is incredibly easy to reach.
No, this is not kern.ipc.maxsockets.
kern.ipc.somaxconn is the listen backlog, not the maximum number of connections.
Accumulating 64K of
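For anyone conflating the two sysctls, they can be read side by side with
sysctlbyname(3); a minimal sketch (mine, not from the thread):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int somaxconn, maxsockets;
	size_t len;

	len = sizeof(somaxconn);
	if (sysctlbyname("kern.ipc.somaxconn", &somaxconn, &len, NULL, 0) == -1)
		err(1, "kern.ipc.somaxconn");
	len = sizeof(maxsockets);
	if (sysctlbyname("kern.ipc.maxsockets", &maxsockets, &len, NULL, 0) == -1)
		err(1, "kern.ipc.maxsockets");
	/* Two different knobs: per-listen backlog cap vs. global socket limit. */
	printf("kern.ipc.somaxconn  (backlog cap):         %d\n", somaxconn);
	printf("kern.ipc.maxsockets (global socket limit): %d\n", maxsockets);
	return (0);
}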
On 3 October 2012 13:01, Garrett Cooper <yaneg...@gmail.com> wrote:
Here's where it's being held at 65535 (sys/kern/uipc_socket.c):
static int
sysctl_somaxconn(SYSCTL_HANDLER_ARGS)
{
	int error;
	int val;

	val = somaxconn;
	error = sysctl_handle_int(oidp, &val, 0, req);
	if (error || !req->newptr)
		return (error);
	/* The cap: anything above USHRT_MAX (65535) is rejected. */
	if (val < 1 || val > USHRT_MAX)
		return (EINVAL);
	somaxconn = val;
	return (0);
}
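Given the val > USHRT_MAX check above, any attempt to set the sysctl past
65535 should come back EINVAL. A quick userland sketch to confirm (assumes
root; a non-root run fails with EPERM instead; 100000 is just an example
value):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int val = 100000;	/* above USHRT_MAX, so the handler rejects it */

	if (sysctlbyname("kern.ipc.somaxconn", NULL, NULL,
	    &val, sizeof(val)) == -1)
		warn("kern.ipc.somaxconn=%d", val);	/* expect EINVAL */
	else
		printf("kern.ipc.somaxconn set to %d\n", val);
	return (0);
}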
On 03.10.2012 22:03, Adrian Chadd wrote:
Hi,
somaxconn is the connection queue depth. If it's sitting at a couple
hundred thousand then something else is going crazily wrong.
I understand your frustration, but there are a lot of instances where
the application just isn't doing things right and
On Wed, Oct 3, 2012 at 1:03 PM, Adrian Chadd <adr...@freebsd.org> wrote:
Hi,
somaxconn is the connection queue depth. If it's sitting at a couple
hundred thousand then something else is going crazily wrong.
I understand your frustration, but there are a lot of instances where
the application
On 10/03/12 13:47, Garrett Cooper wrote:
On Wed, Oct 3, 2012 at 1:03 PM, Adrian Chadd <adr...@freebsd.org> wrote:
Hi,
somaxconn is the connection queue depth. If it's sitting at a
couple hundred thousand then something else is going crazily
Or the TTL of TCP connections might be too high for the volume of
connections received. Someone else on net@ reported that changing
this value to more aggressively reap sockets improved performance
greatly (at the cost of more connections potentially needing to
be reestablished and/or
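Ryan doesn't name the knob in the excerpt; assuming the value in question
is net.inet.tcp.msl (TIME_WAIT lasts 2*MSL, so lowering it reaps sockets
sooner), the adjustment would look something like this sketch (5000 ms is
an arbitrary illustration, not a recommendation from the thread; requires
root):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int msl = 5000;		/* assumed example: 5 s MSL -> 10 s TIME_WAIT */

	if (sysctlbyname("net.inet.tcp.msl", NULL, NULL,
	    &msl, sizeof(msl)) == -1)
		err(1, "net.inet.tcp.msl");
	printf("net.inet.tcp.msl set to %d ms\n", msl);
	return (0);
}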
On Wed, Oct 3, 2012 at 3:03 PM, Ryan Stone <ryst...@gmail.com> wrote:
Or the TTL of TCP connections might be too high for the volume of
connections received. Someone else on net@ reported that changing
this value to more aggressively reap sockets improved performance
greatly (at the cost of