Re: kern.ipc.nmbclusters

2005-03-16 Thread Charles Swiger
On Mar 16, 2005, at 4:44 PM, kalin mintchev wrote:
>> You were exceeding the amount of socket buffer memory available there.
>
> i'm aware of that. the question is why?

The literal answer is that this pool of open connections with lots of
unsent data is clogging things up.  Why those connections are not going
away is the real question to figure out.

>> FIN_WAIT_1 means that one side of the TCP conversation sent a FIN, and
>> the other side (yours) wants to flush the queue of unsent data and will
>> then close the connection.  It's not clear why this isn't working, and
>> there is a timer which gets started which ought to close the connection
>> after 10 minutes or so if no data can be sent.
>
> well that was what i was suggesting in my post but the server is set to
> cut inactive connections after 10 seconds - not minutes. is there any
> other timer i'm missing here?

You are probably referring to the KeepAlive directive in the Apache
config file, but there are other timers present in the TCP stack
itself: specifically, the one described in RFC 793 around section 3.5,
involving a 2 * MSL wait:

"3.5.  Closing a Connection
  CLOSE is an operation meaning "I have no more data to send."  The
  notion of closing a full-duplex connection is subject to ambiguous
  interpretation, of course, since it may not be obvious how to treat
  the receiving side of the connection.  We have chosen to treat CLOSE
  in a simplex fashion.  The user who CLOSEs may continue to RECEIVE
  until he is told that the other side has CLOSED also.  Thus, a program
  could initiate several SENDs followed by a CLOSE, and then continue to
  RECEIVE until signaled that a RECEIVE failed because the other side
  has CLOSED.  We assume that the TCP will signal a user, even if no
  RECEIVEs are outstanding, that the other side has closed, so the user
  can terminate his side gracefully.  A TCP will reliably deliver all
  buffers SENT before the connection was CLOSED so a user who expects no
  data in return need only wait to hear the connection was CLOSED
  successfully to know that all his data was received at the destination
  TCP.  Users must keep reading connections they close for sending until
  the TCP says no more data."
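
If you want to see what the stack is actually using for those timers,
they are exposed as sysctls (names from the 4.x/5.x-era stack - a
sketch, check sysctl -a on your own version):

  sysctl net.inet.tcp.msl               # MSL in milliseconds; 2*MSL waits derive from it
  sysctl net.inet.tcp.keepidle          # idle time before keepalive probing starts
  sysctl net.inet.tcp.always_keepalive  # probe even if the app didn't request keepalives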
--
-Chuck


Re: kern.ipc.nmbclusters

2005-03-16 Thread kalin mintchev
thanks Charles...

>
> You were exceeding the amount of socket buffer memory available there.

i'm aware of that. the question is why?

>>> huge difference. so i think about 260 lines of netstat -p tcp output
>>> like:
>>>
>>> tcp4   0  33580  server.http  c68.112.166.214..3307
>>> FIN_WAIT_1
>>>
>>> has to do with all these 6000 clusters. but i'm not sure how. DOS
>>> maybe?! they are all from the same client ip and all of them have a
>>> much higher number for send than receive Q's. what does the state
>>> FIN_WAIT_1 mean? waiting to finish? if so - why didn't it do that for
>>> hours and hours. my web server keeps connections alive for 10 sec.
>>> there isn't much else that uses tcp on that machine. the webserver was
>>> inaccessible for about 5-10 min. so my first thought was DOS... "11125
>>> requests for memory denied" made it look like it was a DOS...
>>>
>>> maybe somebody can explain the relation if any. it'll be appreciated...
>
> FIN_WAIT_1 means that one side of the TCP conversation sent a FIN, and
> the other side (yours) wants to flush the queue of unsent data and will
> then close the connection.  It's not clear why this isn't working, and
> there is a timer which gets started which ought to close the connection
> after 10 minutes or so if no data can be sent.

well that was what i was suggesting in my post but the server is set to
cut inactive connections after 10 seconds - not minutes. is there any
other timer i'm missing here?

>
> Perhaps the other side is playing games?  If you do a tcpdump against
> that client, are you seeing responses with a 0 window size?

that happened yesterday - 1/2 hr ago. right now it is fine... quiet.
i thought DOS. it hasn't happened before. right now it's using only
250-300 clusters, which is the normal...

thanks...


>
> --
> -Chuck
>
>




Re: kern.ipc.nmbclusters

2005-03-16 Thread Charles Swiger
On Mar 16, 2005, at 3:01 PM, kalin mintchev wrote:
> 11125 requests for memory denied
> 1 requests for memory delayed
> 0 calls to protocol drain routines

You were exceeding the amount of socket buffer memory available there.

> huge difference. so i think about 260 lines of netstat -p tcp output
> like:
>
> tcp4   0  33580  server.http  c68.112.166.214..3307
> FIN_WAIT_1
>
> has to do with all these 6000 clusters. but i'm not sure how. DOS
> maybe?! they are all from the same client ip and all of them have a
> much higher number for send than receive Q's. what does the state
> FIN_WAIT_1 mean? waiting to finish? if so - why didn't it do that for
> hours and hours. my web server keeps connections alive for 10 sec.
> there isn't much else that uses tcp on that machine. the webserver was
> inaccessible for about 5-10 min. so my first thought was DOS... "11125
> requests for memory denied" made it look like it was a DOS...
>
> maybe somebody can explain the relation if any. it'll be appreciated...
FIN_WAIT_1 means that one side of the TCP conversation sent a FIN, and 
the other side (yours) wants to flush the queue of unsent data and will 
then close the connection.  It's not clear why this isn't working, and 
there is a timer which gets started which ought to close the connection 
after 10 minutes or so if no data can be sent.

Perhaps the other side is playing games?  If you do a tcpdump against 
that client, are you seeing responses with a 0 window size?
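
For example (a sketch - the fxp0 interface name is assumed, substitute
your own; the filter matches segments advertising a zero receive window):

  tcpdump -n -i fxp0 host c68.112.166.214 and 'tcp[14:2] = 0'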

--
-Chuck


Re: kern.ipc.nmbclusters

2005-03-16 Thread kalin mintchev

where else can i ask about this?
i tried bsdforums but still total silence there too...

>
>> Did you check top to see if you even use swap?
>
> yea. very small amount.
> Swap: 2032M Total, 624K Used, 2031M Free
>
>>  I never use swap with
>> 512MB on my desktop.  Read man tuning, around byte 32372.
>
> i did a few times. don't remember which byte it was though...
>
>> Try netstat
>> -m.
>
> i did. here:
> this was when i send this message originally:
> # netstat -m
> 6138/6832/26624 mbufs in use (current/peak/max):
> 6137 mbufs allocated to data
> 1 mbufs allocated to fragment reassembly queue headers
> 6092/6656/6656 mbuf clusters in use (current/peak/max)
> 15020 Kbytes allocated to network (75% of mb_map in use)
> 11125 requests for memory denied
> 1 requests for memory delayed
> 0 calls to protocol drain routines
>
> ===
>
> this is now:
>
> netstat -m
> 349/6832/26624 mbufs in use (current/peak/max):
> 348 mbufs allocated to data
> 1 mbufs allocated to fragment reassembly queue headers
> 346/6656/6656 mbuf clusters in use (current/peak/max)
> 15020 Kbytes allocated to network (75% of mb_map in use)
> 11125 requests for memory denied
> 1 requests for memory delayed
> 0 calls to protocol drain routines
>
>
> huge difference. so i think about 260 lines of netstat -p tcp output like:
>
> tcp4   0  33580  server.http  c68.112.166.214..3307
> FIN_WAIT_1
>
> has to do with all these 6000 clusters. but i'm not sure how. DOS maybe?!
> they are all from the same client ip and all of them have a much higher
> number for send than receive Q's. what does the state FIN_WAIT_1 mean?
> waiting to finish? if so - why didn't it do that for hours and hours. my
> web server keeps connections alive for 10 sec. there isn't much else that
> uses tcp on that machine. the webserver was inaccessible for about 5-10
> min. so my first thought was DOS... "11125 requests for memory denied"
> made it look like it was a DOS...
>
> maybe somebody can explain the relation if any. it'll be appreciated...
>
>
> thanks
>
>




Re: kern.ipc.nmbclusters

2005-03-15 Thread kalin mintchev

> Did you check top to see if you even use swap?

yea. very small amount.
Swap: 2032M Total, 624K Used, 2031M Free

>  I never use swap with
> 512MB on my desktop.  Read man tuning, around byte 32372.

i did a few times. don't remember which byte it was though...

> Try netstat
> -m.

i did. here:
this was when i send this message originally:
# netstat -m
6138/6832/26624 mbufs in use (current/peak/max):
6137 mbufs allocated to data
1 mbufs allocated to fragment reassembly queue headers
6092/6656/6656 mbuf clusters in use (current/peak/max)
15020 Kbytes allocated to network (75% of mb_map in use)
11125 requests for memory denied
1 requests for memory delayed
0 calls to protocol drain routines

===

this is now:

netstat -m
349/6832/26624 mbufs in use (current/peak/max):
348 mbufs allocated to data
1 mbufs allocated to fragment reassembly queue headers
346/6656/6656 mbuf clusters in use (current/peak/max)
15020 Kbytes allocated to network (75% of mb_map in use)
11125 requests for memory denied
1 requests for memory delayed
0 calls to protocol drain routines
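
(doing the arithmetic with the usual constants - 256 bytes per mbuf and
2048 bytes per cluster: 6832 mbufs * 256 B = 1708 KB, plus 6656 clusters
* 2048 B = 13312 KB, which sums to exactly the 15020 Kbytes shown - so
that line tracks the peak columns, and it's wired kernel memory, not
something swap can absorb.)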


huge difference. so i think about 260 lines of netstat -p tcp output like:

tcp4   0  33580  server.http  c68.112.166.214..3307 
FIN_WAIT_1

has to do with all these 6000 clusters. but i'm not sure how. DOS maybe?!
they are all from the same client ip and all of them have a much higher
number for send than receive Q's. what does the state FIN_WAIT_1 mean?
waiting to finish? if so - why didn't it do that for hours and hours. my
web server keeps connections alive for 10 sec. there isn't much else that
uses tcp on that machine. the webserver was inaccessible for about 5-10
min. so my first thought was DOS... "11125 requests for memory denied"
made it look like it was a DOS...

maybe somebody can explain the relation if any. it'll be appreciated...


thanks






Re: kern.ipc.nmbclusters

2005-03-15 Thread Jason Henson
On 03/15/05 18:02:22, kalin mintchev wrote:
> ok.. today for the first time ever i saw this in my logs:
>  /kernel: All mbuf clusters exhausted
> so i gotta up the kern.ipc.nmbclusters..
> also what would be a decent nmbclusters to specify in the loader for a
> gig of ram and 2 gigs of swap?
>
> how many mbufs per cluster?
> also why is this client stuck in the netstat. how come Send-Q is so
> much?

Did you check top to see if you even use swap?  I never use swap with
512MB on my desktop.  Read man tuning, around byte 32372.  Try netstat
-m.
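
On sizing: it's not many mbufs per cluster - each 2 KB cluster hangs off
its own 256-byte mbuf header. A sketch of the loader arithmetic, with an
illustrative value rather than a recommendation for your load:

  # /boot/loader.conf
  # 32768 clusters * 2 KB = 64 MB of wired kernel memory - comfortable
  # on a 1 GB box. swap never enters into it; mbuf memory is wired.
  kern.ipc.nmbclusters="32768"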



Re: kern.ipc.nmbclusters in version 4.11

2005-02-17 Thread Chris
I think kern.ipc.nmbclusters is a kernel config setting in 4.11, not a loader.conf tunable.

Chris


On Wed, 16 Feb 2005 11:30:14 -0800 (PST), ann kok <[EMAIL PROTECTED]> wrote:
> Hi all
> 
> I installed freebsd 4.11 in amd64 machine.
> 
> but I can't set kern.ipc.nmbclusters in loader.conf
> 
> It reboots automatically!
> 
> pls help
> 
> Thank you


Re: kern.ipc.nmbclusters

2004-06-30 Thread Mark Terribile

Steve Bertrand <[EMAIL PROTECTED]> writes:

> I have a machine that is rebooting with the following error:
> 
> "All mbuf clusters exhausted, please see tuning(7)."
> 
> Which through google and man tuning I was able to figure out that indeed,
> mbufs were exhausted. So I tried to set kern.ipc.nmbclusters=4096 (which
> should cover the load of the server), but found out afterwards that it
> is not a run-time tunable parameter.

This doesn't answer the question asked, but it may be useful.  A few years
ago (and a few releases ago) I was working on a network box that had to run
under fairly heavy load.  This was a product, and we were not satisfied
with less than 100% CPU, about 6000 network stimuli/second, about 220
transaction/sec on each of five disks, etc.  (On a 1GHz PIII)

I discovered that I couldn't make the mbuf cluster number large enough,
and that the system was prone to panic under sufficiently heavy load.
Sufficiently heavy meant that we had tens of seconds of traffic queued.

The solution was to shorten the TCP listen/accept queues.  I cut them down
to six on each file descriptor, and used kqueue/kevent (then just introduced)
to schedule the work intelligently.  I was able to push the box to near
paralysis with 80% overload (most of it rejected because the input queues
were full) but the box always recovered, and it ran at 10% overload with
only a small latency degradation.

The max accept queue parameter may be worth a look; YMMV.
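
A sketch of where those knobs live (names from the 4.x-era stack): the
per-socket queue length is the backlog argument the application passes
to listen(2), and the kernel clamps it to a global ceiling:

  sysctl kern.ipc.somaxconn           # current cap on any socket's accept queue
  sysctl -w kern.ipc.somaxconn=128    # lower the cap system-wide

Cutting the queues to six, as described above, would be listen(fd, 6)
in the application itself.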

Mark Terribile






Re: kern.ipc.nmbclusters

2004-06-30 Thread Steve Bertrand

>> From your experience, where is the best place to load this variable
>> from, why is it a better location, and what will happen if I don't
>> load it from the proper place?
>
> You have to put it in loader.conf because that value is set _very_ early
> in the boot process (before sysctl.conf is used) and can not be changed
> later.
>
> You can also put this value in your kernel config and recompile your
> kernel.

Thanks Bill. It is not completely clear which syntax would be right for
the file...this:

kern.ipc.nmbclusters=4096

or this:

kern.ipc.nmbclusters="4096"

I certainly don't need an unbootable box with 1500 mail accounts on it :o)

Tks,

Steve

>
> --
> Bill Moran
> Potential Technologies
> http://www.potentialtech.com




Re: kern.ipc.nmbclusters

2004-06-30 Thread Bill Moran
"Steve Bertrand" <[EMAIL PROTECTED]> wrote:

> I have a machine that is rebooting with the following error:
> 
> "All mbuf clusters exhausted, please see tuning(7)."
> 
> Which through google and man tuning I was able to figure out that indeed,
> mbufs were exhausted. So I tried to set kern.ipc.nmbclusters=4096 (which
> should cover the load of the server), but found out afterwards that it
> is not a run-time tunable parameter.
> 
> I searched google, and gathered that I should put this setting in
> /boot/loader.conf.
> 
> This is contradictory of me usually putting kernel tweaks in
> /etc/sysctl.conf.
> 
> From your experience, where is the best place to load this variable from,
> why is it a better location, and what will happen if I don't load it from
> the proper place?

You have to put it in loader.conf because that value is set _very_ early
in the boot process (before sysctl.conf is used) and can not be changed
later.

You can also put this value in your kernel config and recompile your kernel.
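
A sketch of both placements, using the 4096 value from above (the stock
/boot/defaults/loader.conf writes its values in quotes, so that style
is the safe one):

  # /boot/loader.conf - read by the loader, before sysctl.conf exists
  kern.ipc.nmbclusters="4096"

or, in the kernel configuration file before a recompile:

  options NMBCLUSTERS=4096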

-- 
Bill Moran
Potential Technologies
http://www.potentialtech.com