Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-29 Thread Mel
On Tuesday 28 October 2008 15:44:49 Francis Dubé wrote:
 Jeremy Chadwick wrote:
  On Mon, Oct 27, 2008 at 12:56:30PM -0700, Chuck Swiger wrote:
  On Oct 27, 2008, at 12:38 PM, FreeBSD wrote:
  You need to keep your MaxClients setting limited to what your system
  can run under high load; generally the amount of system memory is the
  governing factor. [1]  If you set your MaxClients higher than that,
  your system will start swapping under the load and once you start
  hitting VM, it's game over: your throughput will plummet and clients
  will start getting lots of broken connections, just as you describe.
 
  According to top, we have about 2G of Inactive RAM with 1,5G Active
  (4G total RAM with amd64). Swapping is not a problem in this case.
 
  With 4GB of RAM, you're less likely to run into issues, but the most
  relevant numbers would be the Swap: line in top under high load, or the
  output of vmstat 1 / vmstat -s.

 We're monitoring our swap with cacti, and we've never been swapping even
 during high load because we don't let Apache spawn enough processes to do so.

  It would also be helpful to know what your httpd's are looking like in
  terms of size, and what your content is like.  For Apache serving mostly
  static content and not including mod_perl, mod_php, etc, you tend to
  have 5-10MB processes and much of that is shared, so you might well be
  able to run 400+ httpd children.  On the other hand, as soon as you pull
  in the dynamic language modules like perl or PHP, you end up with much
  larger process sizes (20 - 40 MB) and much more of their memory usage is
  per-process rather than shared, so even with 4GB you probably won't be
  able to run more than 100-150 children before swapping.

 Here's an example of top's output regarding our httpd processes:
 54326 apache   1  96    0   156M 13108K select 1   0:00  0.15% httpd
 54952 apache   1  96    0   156M 12684K select 1   0:00  0.10% httpd
 52343 apache   1   4    0   155M 12280K select 0   0:01  0.10% httpd

 Most of our pages are in HTML with a LOT of images. Few PHP pages, very
 light PHP processing.

Then your best bet is to properly set up mod_expires for the images. More than
anything else, that will reduce the load on your server.

http://httpd.apache.org/docs/2.2/mod/mod_expires.html#expiresbytype
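
For example, a minimal snippet along these lines in httpd.conf or the relevant
vhost (the types and lifetimes here are only an illustration, adjust them to
your content) tells browsers to cache the images instead of re-fetching them
on every page view:

    ExpiresActive On
    ExpiresByType image/gif  "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png  "access plus 1 month"
    ExpiresDefault "access plus 1 hour"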

The better solution involves recoding your site to serve images from a
different webserver (it can be the same machine; simply use a very light jail).
That installation loads only mod_expires and mod_headers, and serves nothing
but images.

I would do this regardless, for two reasons:
1) You will almost certainly get rid of the PMAP_SHPGPERPROC warning on the
document server.
2) You will more easily detect bottlenecks in scripts, because the problem is
not aggravated/masked by the image serving.
-- 
Mel

Problem with today's modular software: they start with the modules
and never get to the software part.


Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-28 Thread Francis Dubé

Jeremy Chadwick wrote:

On Mon, Oct 27, 2008 at 12:56:30PM -0700, Chuck Swiger wrote:
  

On Oct 27, 2008, at 12:38 PM, FreeBSD wrote:

You need to keep your MaxClients setting limited to what your system 
can run under high load; generally the amount of system memory is the 
governing factor. [1]  If you set your MaxClients higher than that, 
your system will start swapping under the load and once you start 
hitting VM, it's game over: your throughput will plummet and clients 
will start getting lots of broken connections, just as you describe.

According to top, we have about 2G of Inactive RAM with 1,5G Active  
(4G total RAM with amd64). Swapping is not a problem in this case.
  
With 4GB of RAM, you're less likely to run into issues, but the most  
relevant numbers would be the Swap: line in top under high load, or the 
output of vmstat 1 / vmstat -s.

We're monitoring our swap with cacti, and we've never been swapping even
during high load because we don't let Apache spawn enough processes to do so.
It would also be helpful to know what your httpd's are looking like in  
terms of size, and what your content is like.  For Apache serving mostly 
static content and not including mod_perl, mod_php, etc, you tend to have 
5-10MB processes and much of that is shared, so you might well be able to 
run 400+ httpd children.  On the other hand, as soon as you pull in the 
dynamic language modules like perl or PHP, you end up with much larger 
process sizes (20 - 40 MB) and much more of their memory usage is 
per-process rather than shared, so even with 4GB you probably won't be 
able to run more than 100-150 children before swapping.


Here's an example of top's output regarding our httpd processes:
54326 apache   1  96    0   156M 13108K select 1   0:00  0.15% httpd
54952 apache   1  96    0   156M 12684K select 1   0:00  0.10% httpd
52343 apache   1   4    0   155M 12280K select 0   0:01  0.10% httpd

Most of our pages are in HTML with a LOT of images. Few PHP pages, very
light PHP processing.


156M x 450 processes = way more RAM than we have (same for RES).
Concretely, how should I interpret these results?

After checking multiple things (MySQL, network, CPU, RAM) when a drop
occurs, we determined that every time there is a drop, the number of
Apache processes is at MaxClients (ps aux | grep httpd | wc -l) and new
HTTP requests don't get an answer from Apache (the TCP handshake
completes but Apache never pushes the data).
  
Yes, that aspect is going to be the same pretty much no matter what the 
bottleneck is or how large you set MaxClients to.  You will end up with 
significantly better results (fewer drops, higher aggregate throughput) 
if you tune appropriately than if you try to ramp MaxClients up further 
than the available hardware can support.

At the moment, MaxClients isn't higher than the hardware can
support, but it will be if I raise it again. I've gone through Apache's
documentation on high-load tuning and did everything that can be done on the
fly (without downtime). We were already planning to add another webserver
soon, but maybe not that soon :)
You might find that checking out the URLs being most commonly listed in 
http://yourdomain.com/server-status when you run into high load problems 
will point towards a particular script or dynamic content which is 
causing a bottleneck.


Thanks for the tips, I'll take a look at this.


One of the problems here is that the individual reporting the problem is
basing all of his conclusions on the first couple of lines of top(1)
output, and is not bothering to look at per-process RSS or SZ.  "I have
lots of Inactive RAM, so what's the problem!??!"

We should probably take the time to explain to the user the fact that
shared pages per process != amount of RAM that's been touched/used at
one point but is currently unused.  Without someone explaining how the
VM works in this regard, he's going to continue to be confused and
correlate things which aren't necessarily related.
  
Right! I would really appreciate some explanation of this. Do the shared
pages count as active or inactive RAM? How can I calculate how much
physical RAM an Apache process is taking? How does the VM work in this
regard? ;)


Thanks everyone !

Francis Dube
RD
Optik Securite
www.optiksecurite.com



Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-28 Thread Jeremy Chadwick
On Tue, Oct 28, 2008 at 10:44:49AM -0400, Francis Dubé wrote:
 Jeremy Chadwick wrote:
 On Mon, Oct 27, 2008 at 12:56:30PM -0700, Chuck Swiger wrote:
   
 On Oct 27, 2008, at 12:38 PM, FreeBSD wrote:
 
 You need to keep your MaxClients setting limited to what your 
 system can run under high load; generally the amount of system 
 memory is the governing factor. [1]  If you set your MaxClients 
 higher than that, your system will start swapping under the load 
 and once you start hitting VM, it's game over: your throughput 
 will plummet and clients will start getting lots of broken 
 connections, just as you describe.
 
 According to top, we have about 2G of Inactive RAM with 1,5G Active 
  (4G total RAM with amd64). Swapping is not a problem in this case.
   
 With 4GB of RAM, you're less likely to run into issues, but the most  
 relevant numbers would be the Swap: line in top under high load, or 
 the output of vmstat 1 / vmstat -s.
 
 We're monitoring our swap with cacti, and we've never been swapping even
 during high load because we don't let Apache spawn enough processes to do
 so.

I'm not sure you fully understand the concept of swapping (the term can
be used for a multitude of things).  :-)  Some processes which sit
idle/unused will have portions of their memory swapped out (to
swap/disk) to allow for actively running processes to utilise physical
memory.  This is something to keep in mind.

 It would also be helpful to know what your httpd's are looking like 
 in  terms of size, and what your content is like.  For Apache serving 
 mostly static content and not including mod_perl, mod_php, etc, you 
 tend to have 5-10MB processes and much of that is shared, so you 
 might well be able to run 400+ httpd children.  On the other hand, as 
 soon as you pull in the dynamic language modules like perl or PHP, 
 you end up with much larger process sizes (20 - 40 MB) and much more 
 of their memory usage is per-process rather than shared, so even with 
 4GB you probably won't be able to run more than 100-150 children 
 before swapping.
 
 Here's an example of top's output regarding our httpd processes:
 54326 apache   1  96    0   156M 13108K select 1   0:00  0.15% httpd
 54952 apache   1  96    0   156M 12684K select 1   0:00  0.10% httpd
 52343 apache   1   4    0   155M 12280K select 0   0:01  0.10% httpd

 Most of our pages are in HTML with a LOT of images. Few PHP pages, very
 light PHP processing.

 156M x 450 processes = way more RAM than we have (same for RES).
 Concretely, how should I interpret these results?

It's as I expected -- you don't understand the difference between
SIZE (SZ) and RES (RSS).  The simple version:

SIZE == amount of memory that's shared across all processes on the
machine, e.g. shared libraries.  It doesn't mean 156MB is being taken
up per process.

RES == amount of memory that's specifically allocated to that individual
process.  The three httpd processes above are taking up a total of
~38MBytes of memory (13108K + 12684K + 12280K).
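
If you want to eyeball both numbers for all the children at once, something
like the following should work (just an illustration; the exact column
keywords can be checked in ps(1)):

    # ps -ax -o pid,vsz,rss,command | grep httpd

where vsz roughly corresponds to top's SIZE (in KB) and rss to RES.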

 Right! I would really appreciate some explanation of this. Do the shared
 pages count as active or inactive RAM? How can I calculate how much
 physical RAM an Apache process is taking? How does the VM work in this
 regard? ;)

Others will have to explain the shared memory/pages aspect, as it's
beyond my understanding.  But recent versions of 7.0 and 7.1-PRERELEASE
contain a tool called procstat(1) which can help you break down the
memory usage within a process.
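
For example (purely illustrative), running it against one of the children
shows each memory mapping with its size, protections, and whether it is
backed by a vnode (shared libraries, mmap'ed files) or is private anonymous
memory:

    # procstat -v 54326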

-- 
| Jeremy Chadwick   jdc at parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-28 Thread Francis Dubé

Jeremy Chadwick wrote:

On Tue, Oct 28, 2008 at 10:44:49AM -0400, Francis Dubé wrote:
  

Jeremy Chadwick wrote:


On Mon, Oct 27, 2008 at 12:56:30PM -0700, Chuck Swiger wrote:
  
  

On Oct 27, 2008, at 12:38 PM, FreeBSD wrote:


You need to keep your MaxClients setting limited to what your 
system can run under high load; generally the amount of system 
memory is the governing factor. [1]  If you set your MaxClients 
higher than that, your system will start swapping under the load 
and once you start hitting VM, it's game over: your throughput 
will plummet and clients will start getting lots of broken 
connections, just as you describe.


According to top, we have about 2G of Inactive RAM with 1,5G Active 
 (4G total RAM with amd64). Swapping is not a problem in this case.
  
  
With 4GB of RAM, you're less likely to run into issues, but the most  
relevant numbers would be the Swap: line in top under high load, or 
the output of vmstat 1 / vmstat -s.


We're monitoring our swap with cacti, and we've never been swapping even
during high load because we don't let Apache spawn enough processes to do
so.



I'm not sure you fully understand the concept of swapping (the term can
be used for a multitude of things).  :-)  Some processes which sit
idle/unused will have portions of their memory swapped out (to
swap/disk) to allow for actively running processes to utilise physical
memory.  This is something to keep in mind.

  
It would also be helpful to know what your httpd's are looking like 
in  terms of size, and what your content is like.  For Apache serving 
mostly static content and not including mod_perl, mod_php, etc, you 
tend to have 5-10MB processes and much of that is shared, so you 
might well be able to run 400+ httpd children.  On the other hand, as 
soon as you pull in the dynamic language modules like perl or PHP, 
you end up with much larger process sizes (20 - 40 MB) and much more 
of their memory usage is per-process rather than shared, so even with 
4GB you probably won't be able to run more than 100-150 children 
before swapping.



Here's an example of top's output regarding our httpd processes:
54326 apache   1  96    0   156M 13108K select 1   0:00  0.15% httpd
54952 apache   1  96    0   156M 12684K select 1   0:00  0.10% httpd
52343 apache   1   4    0   155M 12280K select 0   0:01  0.10% httpd

Most of our pages are in HTML with a LOT of images. Few PHP pages, very
light PHP processing.


156M x 450 processes = way more RAM than we have (same for RES).
Concretely, how should I interpret these results?



It's as I expected -- you don't understand the difference between
SIZE (SZ) and RES (RSS).  The simple version:

SIZE == amount of memory that's shared across all processes on the
machine, e.g. shared libraries.  It doesn't mean 156MB is being taken
up per process.

RES == amount of memory that's specifically allocated to that individual
process.  The three httpd processes above are taking up a total of
~38MBytes of memory (13108K + 12684K + 12280K).
  


As I said, even with RES the numbers don't seem to make any sense.

Let's say 12500K x 450 = ~5500 MBytes. Considering there are a lot of
processes other than Apache running on the server... there's something
wrong. Is there something shared in RES too?


  
Right! I would really appreciate some explanation of this. Do the shared
pages count as active or inactive RAM? How can I calculate how much
physical RAM an Apache process is taking? How does the VM work in this
regard? ;)



Others will have to explain the shared memory/pages aspect, as it's
beyond my understanding.  But recent versions of 7.0 and 7.1-PRERELEASE
contain a tool called procstat(1) which can help you break down the
memory usage within a process.

  

Our next server will be in 7.0 for sure.



Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-28 Thread Chuck Swiger

On Oct 28, 2008, at 9:49 AM, Francis Dubé wrote:

Here's an example of top's output regarding our httpd processes:
54326 apache   1  96    0   156M 13108K select 1   0:00  0.15% httpd
54952 apache   1  96    0   156M 12684K select 1   0:00  0.10% httpd
52343 apache   1   4    0   155M 12280K select 0   0:01  0.10% httpd

Most of our pages are in HTML with a LOT of images. Few PHP pages,
very light PHP processing.


156M x 450 processes = way more RAM than we have (same for
RES).  Concretely, how should I interpret these results?


First, your Apache children are huge, at least for FreeBSD.  :-)   
Also, they are mostly paged out, which suggests your system is under  
significant VM pressure, but the vmstat output would be helpful to  
confirm.



It's as I expected -- you don't understand the difference between
SIZE (SZ) and RES (RSS).  The simple version:

SIZE == amount of memory that's shared across all processes on the
machine, e.g. shared libraries.  It doesn't mean 156MB is being taken
up per process.


SIZE == the amount of VM address space allocated by the process.

It includes things shared (copy-on-write) between many processes like  
the shared libraries; it also includes memory-mapped files  
(including .so's like apache modules being loaded into the process),  
VM allocated but not yet used by malloc()/brk(), the stack, and so  
forth.


RES == amount of memory that's specifically allocated to that individual
process.  The three httpd processes above are taking up a total of
~38MBytes of memory (13108K + 12684K + 12280K).


RES == the amount of process VM that is resident in actual physical  
RAM; the rest of the process is paged out to the swapfile or  
filesystem for memory-mapped files.



As I said, even with RES the numbers don't seem to make any sense.

Let's say 12500K x 450 = ~5500 MBytes. Considering there are a lot of
processes other than Apache running on the server... there's something
wrong. Is there something shared in RES too?


Yep.  Quite probably a lot, but the amount of memory which is specific  
to just that process is not easily found from FreeBSD's top,  
regrettably.


For the sake of example, and because the same explanation applies
pretty closely to FreeBSD, consider an httpd running on a MacOSX
system.  Here's top output, which includes columns "RPRVT" for
resident memory used by just this process, "RSHRD" which is
resident, shared with other processes, "RSIZE" which is FreeBSD's
RES, and "VSIZE", which is FreeBSD's SIZE:


Processes:  136 total, 4 running, 132 sleeping... 215 threads    11:06:40
Load Avg:  1.71, 1.66, 1.62 CPU usage:  12.5% user, 59.7% sys, 27.8% idle
SharedLibs: num =  141, resident = 18.3M code, 2.92M data, 6.40M LinkEdit
MemRegions: num = 10360, resident =  101M + 5.91M private,  159M shared
PhysMem:   159M wired,  252M active, 99.0M inactive,  510M used, 1.50G free
VM: 7.16G + 88.8M   1378510(0) pageins, 88743(0) pageouts

  PID COMMAND  %CPU   TIME   #TH #PRTS #MREGS RPRVT  RSHRD  RSIZE  VSIZE
 2868 httpd    0.0% 43:21.28   11292  1.82M   144M  72.9M   169M
 2869 httpd    0.0% 46:29.45   11292  1.95M   144M  73.2M   169M
 2870 httpd    0.0% 46:55.84   11292  1.89M   144M  73.0M   169M


...and the vmmap command, documented here:

  
http://developer.apple.com/documentation/Darwin/Reference/ManPages/man1/vmmap.1.html

...provides detailed info about a single process' VM usage:

# vmmap 2870
Virtual Memory Map of process 2870 (httpd)
Output report format:  2.0

 Non-writable regions for process 2870
__PAGEZERO -1000 [4K] ---/--- SM=NUL  /usr/sbin/httpd
__TEXT 1000-0005 [  316K] r-x/rwx SM=COW  /usr/sbin/httpd
__LINKEDIT 0005a000-00065000 [   44K] r--/rwx SM=COW  /usr/sbin/httpd
__TEXT 00065000-00068000 [   12K] r-x/rwx SM=COW  /usr/libexec/httpd/mod_log_config.so
__LINKEDIT 00069000-0006a000 [4K] r--/rwx SM=COW  /usr/libexec/httpd/mod_log_config.so
__TEXT 0006a000-0006c000 [8K] r-x/rwx SM=COW  /usr/libexec/httpd/mod_mime.so
__LINKEDIT 0006d000-0006e000 [4K] r--/rwx SM=COW  /usr/libexec/httpd/mod_mime.so

[ ... ]
__DATA a1a0e000-a1a2 [   72K] r--/r-- SM=COW  /usr/lib/libcrypto.0.9.7.dylib
__DATA a1a2-a1a23000 [   12K] r--/r-- SM=COW  /usr/lib/libcrypto.0.9.7.dylib
__DATA a4f2c000-a4f2f000 [   12K] r--/r-- SM=COW  /usr/lib/libssl.0.9.7.dylib
__DATA a7233000-a7235000 [8K] r--/r-- SM=NUL  /System/Library/Perl/lib/5.8/libperl.dylib
system fffec000-fffef000 [   12K] ---/rwx SM=NUL  commpage [libobjc.A.dylib]
system fffef000- [4K] r-x/rwx SM=COW  commpage [libobjc.A.dylib]
system 8000-a000 [8K] r--/r-- SM=SHM  commpage [libSystem.B.dylib]


 

Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-28 Thread Francis Dubé

Chuck Swiger wrote:

On Oct 28, 2008, at 9:49 AM, Francis Dubé wrote:

Here's an example of top's output regarding our httpd processes:
54326 apache   1  96    0   156M 13108K select 1   0:00  0.15% httpd
54952 apache   1  96    0   156M 12684K select 1   0:00  0.10% httpd
52343 apache   1   4    0   155M 12280K select 0   0:01  0.10% httpd

Most of our pages are in HTML with a LOT of images. Few PHP pages,
very light PHP processing.


156M x 450 processes = way more RAM than we have (same for
RES).  Concretely, how should I interpret these results?


First, your Apache children are huge, at least for FreeBSD.  :-)  
Also, they are mostly paged out, which suggests your system is under 
significant VM pressure, but the vmstat output would be helpful to 
confirm.

I'll try to remove some useless modules.

Here's the output of vmstat; I should've pasted it in the previous
message, sorry about that.


vmstat -s
3777275311 cpu context switches
2105577673 device interrupts
359873900 software interrupts
2670696893 traps
2635245695 system calls
  43 kernel threads created
4271  fork() calls
  824925 vfork() calls
   0 rfork() calls
5130 swap pager pageins
7513 swap pager pages paged in
5266 swap pager pageouts
   10722 swap pager pages paged out
  518980 vnode pager pageins
 1659001 vnode pager pages paged in
 5717865 vnode pager pageouts
11193440 vnode pager pages paged out
5530 page daemon wakeups
140578661 pages examined by the page daemon
 1701262 pages reactivated
2968698933 copy-on-write faults
 3856240 copy-on-write optimized faults
4090371353 zero fill pages zeroed
3851399420 zero fill pages prezeroed
  457318 intransit blocking page faults
1628587285 total VM faults taken
   0 pages affected by kernel thread creation
3234876655 pages affected by  fork()
88075637 pages affected by vfork()
   0 pages affected by rfork()
1591911567 pages freed
   2 pages freed by daemon
4139768534 pages freed by exiting processes
  331854 pages active
  367993 pages inactive
   41103 pages in VM cache
  118472 pages wired down
   93794 pages free
4096 bytes per page
125580578352 total name lookups
 cache hits (98% pos + 0% neg) system 0% per-directory
 deletions 0%, falsehits 0%, toolong 0%



It's as I expected -- you don't understand the difference between
SIZE (SZ) and RES (RSS).  The simple version:

SIZE == amount of memory that's shared across all processes on the
machine, e.g. shared libraries.  It doesn't mean 156MB is being taken
up per process.


SIZE == the amount of VM address space allocated by the process.

It includes things shared (copy-on-write) between many processes like 
the shared libraries; it also includes memory-mapped files (including 
.so's like apache modules being loaded into the process), VM allocated 
but not yet used by malloc()/brk(), the stack, and so forth.


RES == amount of memory that's specifically allocated to that individual
process.  The three httpd processes above are taking up a total of
~38MBytes of memory (13108K + 12684K + 12280K).


RES == the amount of process VM that is resident in actual physical 
RAM; the rest of the process is paged out to the swapfile or 
filesystem for memory-mapped files.



As I said, even with RES the numbers don't seem to make any sense.

Let's say 12500K x 450 = ~5500 MBytes. Considering there are a lot of
processes other than Apache running on the server... there's something
wrong. Is there something shared in RES too?


Yep.  Quite probably a lot, but the amount of memory which is specific 
to just that process is not easily found from FreeBSD's top, regrettably.


For the sake of example, and because the same explanation applies
pretty closely to FreeBSD, consider an httpd running on a MacOSX
system.  Here's top output, which includes columns "RPRVT" for
resident memory used by just this process, "RSHRD" which is
resident, shared with other processes, "RSIZE" which is FreeBSD's
RES, and "VSIZE", which is FreeBSD's SIZE:


Processes:  136 total, 4 running, 132 sleeping... 215 threads    11:06:40
Load Avg:  1.71, 1.66, 1.62 CPU usage:  12.5% user, 59.7% sys, 27.8% idle
SharedLibs: num =  141, resident = 18.3M code, 2.92M data, 6.40M LinkEdit
MemRegions: num = 10360, resident =  101M + 5.91M private,  159M shared
PhysMem:   159M wired,  252M active, 99.0M inactive,  510M used, 1.50G free
VM: 7.16G + 88.8M   1378510(0) pageins, 88743(0) pageouts

  PID COMMAND  %CPU   TIME   #TH #PRTS #MREGS RPRVT  RSHRD  RSIZE  VSIZE
 2868 httpd    0.0% 43:21.28   11292  1.82M   144M  72.9M   169M
 2869 httpd    0.0% 46:29.45   11292  1.95M   144M  73.2M   169M
 2870 httpd    0.0% 46:55.84   11292  1.89M   144M  73.0M   169M


...and the vmmap command, documented here:

  
http://developer.apple.com/documentation/Darwin/Reference/ManPages/man1/vmmap.1.html 



...provides detailed info about a single process' VM usage:

# vmmap 

collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Francis Dubé

Hi everyone,

I'm running a webserver on FreeBSD (6.2-RELEASE-p6) and I have this
error in my logs:


collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

I've read that this is mainly caused by Apache spawning too many
processes. Everyone seems to suggest decreasing the MaxClients
directive in Apache (set to 450 at the moment), but here's the
problem... I need to increase it! During peaks all the processes are in
use; we even have little drops sometimes because there aren't enough
processes to serve the requests. Our traffic is increasing slowly over
time, so I'm afraid that it'll become a real problem soon. Any tips on
how I could deal with this situation, on Apache's or FreeBSD's side?


Here's the useful part of my conf :

Apache/2.2.4, compiled with prefork mpm.
httpd.conf :
[...]
<IfModule mpm_prefork_module>
    ServerLimit          450
    StartServers           5
    MinSpareServers        5
    MaxSpareServers       10
    MaxClients           450
    MaxRequestsPerChild    0
</IfModule>

KeepAlive On
KeepAliveTimeout 15
MaxKeepAliveRequests 500
[...]


Francis Dube
RD
Optik Securite
www.optiksecurite.com


Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Chuck Swiger

On Oct 27, 2008, at 11:39 AM, Francis Dubé wrote:
I've read that this is mainly caused by Apache spawning too many
processes. Everyone seems to suggest decreasing the MaxClients
directive in Apache (set to 450 at the moment), but here's the
problem... I need to increase it! During peaks all the processes are
in use; we even have little drops sometimes because there aren't
enough processes to serve the requests. Our traffic is increasing
slowly over time, so I'm afraid that it'll become a real problem
soon. Any tips on how I could deal with this situation, on Apache's or
FreeBSD's side?


You need to keep your MaxClients setting limited to what your system  
can run under high load; generally the amount of system memory is the  
governing factor. [1]  If you set your MaxClients higher than that,  
your system will start swapping under the load and once you start  
hitting VM, it's game over: your throughput will plummet and clients  
will start getting lots of broken connections, just as you describe.


For a rough starting point, divide system RAM by httpd's typical  
resident memory size.  If your load legitimately exceeds this, you'll  
need to beef up the machine or run multiple webserver boxes behind a  
load-balancer (IPFW round-robin or similar with PF is a starting  
point, but something like a Netscaler or Foundry ServerIron are what  
the big websites generally use).
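
As a rough sketch of the PF variant (the interface name and backend addresses
are made up for the example), a single rdr rule in pf.conf can spread incoming
port 80 connections across several backends:

    ext_if = "em0"
    web_servers = "{ 10.0.0.11, 10.0.0.12 }"
    rdr on $ext_if proto tcp from any to ($ext_if) port 80 -> $web_servers round-robin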


--
-Chuck

[1]: There can be other bottlenecks; sometimes poorly written external
cgi-bin scripts or dynamic content coming from mod_perl, mod_php, etc.
can demand a lot of CPU or end up blocking on some resource (i.e., DB
locking) and choking the webserver's performance before it runs out of
RAM.  But you can run a site getting several million hits a day on a
Sun E250 with only 1GB of RAM and 2 x ~400MHz CPU.  :-)



Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Simon Chang
 collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

 I've read that this is mainly caused by Apache spawning too many processes.
 Everyone seems to suggest decreasing the MaxClients directive in Apache (set
 to 450 at the moment), but here's the problem... I need to increase it!
 During peaks all the processes are in use; we even have little drops
 sometimes because there aren't enough processes to serve the requests. Our
 traffic is increasing slowly over time, so I'm afraid that it'll become a
 real problem soon. Any tips on how I could deal with this situation, on
 Apache's or FreeBSD's side?

On page 85 of Michael Lucas' Absolute BSD, there is a solution to
your problem that someone else had come across before.  The solution
involves (1) increasing the PMAP_SHPGPERPROC parameter in the kernel
to a higher value and rebuilding the kernel, and (2) increasing the
amount of physical RAM to complement it.

For more details, go to

http://books.google.com/books?id=vebgS-r9fP8C&pg=PA85&lpg=PA85&dq=Michael+Lucas+collecting+pv+entries&source=web&ots=9Fl2T_Uyqi&sig=6LgchiUI5r0NTL6PaK3sxnFuIBI&hl=en&sa=X&oi=book_result&resnum=1&ct=result

Good luck,

Simon Chang


Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread FreeBSD

Chuck Swiger wrote:

On Oct 27, 2008, at 11:39 AM, Francis Dubé wrote:
I've read that this is mainly caused by Apache spawning too many
processes. Everyone seems to suggest decreasing the MaxClients
directive in Apache (set to 450 at the moment), but here's the
problem... I need to increase it! During peaks all the processes are
in use; we even have little drops sometimes because there aren't enough
processes to serve the requests. Our traffic is increasing slowly over
time, so I'm afraid that it'll become a real problem soon. Any tips on
how I could deal with this situation, on Apache's or FreeBSD's side?


You need to keep your MaxClients setting limited to what your system can 
run under high load; generally the amount of system memory is the 
governing factor. [1]  If you set your MaxClients higher than that, your 
system will start swapping under the load and once you start hitting VM, 
it's game over: your throughput will plummet and clients will start 
getting lots of broken connections, just as you describe.




According to top, we have about 2G of Inactive RAM with 1,5G Active (4G
total RAM with amd64). Swapping is not a problem in this case. After
checking multiple things (MySQL, network, CPU, RAM) when a drop occurs,
we determined that every time there is a drop, the number of Apache
processes is at MaxClients (ps aux | grep httpd | wc -l) and new HTTP
requests don't get an answer from Apache (the TCP handshake completes but
Apache never pushes the data).


Thanks for your reply!

For a rough starting point, divide system RAM by httpd's typical 
resident memory size.  If your load legitimately exceeds this, you'll 
need to beef up the machine or run multiple webserver boxes behind a 
load-balancer (IPFW round-robin or similar with PF is a starting point, 
but something like a Netscaler or Foundry ServerIron are what the big 
websites generally use).






Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread FreeBSD

Simon Chang wrote:

collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

I've read that this is mainly caused by Apache spawning too many processes.
Everyone seems to suggest decreasing the MaxClients directive in Apache (set
to 450 at the moment), but here's the problem... I need to increase it!
During peaks all the processes are in use; we even have little drops
sometimes because there aren't enough processes to serve the requests. Our
traffic is increasing slowly over time, so I'm afraid that it'll become a
real problem soon. Any tips on how I could deal with this situation, on
Apache's or FreeBSD's side?


On page 85 of Michael Lucas' Absolute BSD, there is a solution to
your problem that someone else had come across before.  The solution
involves (1) increasing the PMAP_SHPGPERPROC parameter in the kernel
to a higher value and rebuilding the kernel, and (2) increasing the
amount of physical RAM to complement it.

For more details, go to

http://books.google.com/books?id=vebgS-r9fP8C&pg=PA85&lpg=PA85&dq=Michael+Lucas+collecting+pv+entries&source=web&ots=9Fl2T_Uyqi&sig=6LgchiUI5r0NTL6PaK3sxnFuIBI&hl=en&sa=X&oi=book_result&resnum=1&ct=result

Good luck,

Simon Chang


Thanks for the links; they're pretty helpful, but this server is the only
production web server we have. I don't really like the idea of
recompiling the kernel with a new option...


I don't really understand why we are getting this error since there is 
plenty of Inactive RAM in the system (2G inactive on a 4G server with 
amd64). Is this a normal error in this case?


Thank you for your quick reply.


Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Chuck Swiger

On Oct 27, 2008, at 12:38 PM, FreeBSD wrote:
You need to keep your MaxClients setting limited to what your  
system can run under high load; generally the amount of system  
memory is the governing factor. [1]  If you set your MaxClients  
higher than that, your system will start swapping under the load  
and once you start hitting VM, it's game over: your throughput will  
plummet and clients will start getting lots of broken connections,  
just as you describe.


According to top, we have about 2G of Inactive RAM with 1,5G Active  
(4G total RAM with amd64). Swapping is not a problem in this case.


With 4GB of RAM, you're less likely to run into issues, but the most  
relevant numbers would be the Swap: line in top under high load, or  
the output of vmstat 1 / vmstat -s.


It would also be helpful to know what your httpd's are looking like in  
terms of size, and what your content is like.  For Apache serving  
mostly static content and not including mod_perl, mod_php, etc, you  
tend to have 5-10MB processes and much of that is shared, so you might  
well be able to run 400+ httpd children.  On the other hand, as soon  
as you pull in the dynamic language modules like perl or PHP, you end  
up with much larger process sizes (20 - 40 MB) and much more of their  
memory usage is per-process rather than shared, so even with 4GB you  
probably won't be able to run more than 100-150 children before  
swapping.


After checking multiple things (MySQL, network, CPU, RAM) when a
drop occurs, we determined that every time there is a drop, the number
of Apache processes is at MaxClients (ps aux | grep httpd | wc -l) and
new HTTP requests don't get an answer from Apache (the TCP handshake
completes but Apache never pushes the data).


Yes, that aspect is going to be the same pretty much no matter what  
the bottleneck is or how large you set MaxClients to.  You will end up  
with significantly better results (fewer drops, higher aggregate  
throughput) if you tune appropriately than if you try to ramp  
MaxClients up further than the available hardware can support.


You might find that checking out the URLs being most commonly listed  
in http://yourdomain.com/server-status when you run into high load  
problems will point towards a particular script or dynamic content  
which is causing a bottleneck.


Regards,
--
-Chuck




Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Jeremy Chadwick
On Mon, Oct 27, 2008 at 12:56:30PM -0700, Chuck Swiger wrote:
 On Oct 27, 2008, at 12:38 PM, FreeBSD wrote:
 You need to keep your MaxClients setting limited to what your system 
 can run under high load; generally the amount of system memory is the 
 governing factor. [1]  If you set your MaxClients higher than that, 
 your system will start swapping under the load and once you start 
 hitting VM, it's game over: your throughput will plummet and clients 
 will start getting lots of broken connections, just as you describe.

 According to top, we have about 2G of Inactive RAM with 1,5G Active  
 (4G total RAM with amd64). Swapping is not a problem in this case.

 With 4GB of RAM, you're less likely to run into issues, but the most  
 relevant numbers would be the Swap: line in top under high load, or the 
 output of vmstat 1 / vmstat -s.

 It would also be helpful to know what your httpd's are looking like in  
 terms of size, and what your content is like.  For Apache serving mostly 
 static content and not including mod_perl, mod_php, etc, you tend to have 
 5-10MB processes and much of that is shared, so you might well be able to 
 run 400+ httpd children.  On the other hand, as soon as you pull in the 
 dynamic language modules like perl or PHP, you end up with much larger 
 process sizes (20 - 40 MB) and much more of their memory usage is 
 per-process rather than shared, so even with 4GB you probably won't be 
 able to run more than 100-150 children before swapping.

 After checking multiple things (MySQL, network, CPU, RAM) when a drop
 occurs, we determined that every time there is a drop, the number of
 Apache processes is at MaxClients (ps aux | grep httpd | wc -l) and
 new HTTP requests don't get an answer from Apache (the TCP handshake
 completes but Apache never pushes the data).

 Yes, that aspect is going to be the same pretty much no matter what the 
 bottleneck is or how large you set MaxClients to.  You will end up with 
 significantly better results (fewer drops, higher aggregate throughput) 
 if you tune appropriately than if you try to ramp MaxClients up further 
 than the available hardware can support.

 You might find that checking out the URLs being most commonly listed in 
 http://yourdomain.com/server-status when you run into high load problems 
 will point towards a particular script or dynamic content which is 
 causing a bottleneck.

One of the problems here is that the individual reporting the problem is
basing all of his conclusions on the first couple of lines of top(1)
output, and is not bothering to look at per-process RSS or SZ.  "I have
lots of Inactive RAM, so what's the problem!??!"

We should probably take the time to explain to the user the fact that
shared pages per process != amount of RAM that's been touched/used at
one point but is currently unused.  Without someone explaining how the
VM works in this regard, he's going to continue to be confused and
correlate things which aren't necessarily related.

-- 
| Jeremy Chadwick   jdc at parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Bill Moran
In response to FreeBSD [EMAIL PROTECTED]:

 Simon Chang wrote:
  collecting pv entries -- suggest increasing PMAP_SHPGPERPROC
 
  I've read that this is mainly caused by Apache spawning too many processes.
  Everyone seems to suggest decreasing the MaxClients directive in
  Apache (set to 450 at the moment), but here's the problem... I need to
  increase it! During peaks all the processes are in use; we even have little
  drops sometimes because there aren't enough processes to serve the requests.
  Our traffic is increasing slowly over time, so I'm afraid that it'll become
  a real problem soon. Any tips on how I could deal with this situation, on
  Apache's or FreeBSD's side?

[snip]

 I don't really understand why we are getting this error since there is 
 plenty of Inactive RAM in the system (2G inactive on a 4G server with 
 amd64). Is this a normal error in this case?

It's not about physical RAM, it's about kernel tables that are tracking
RAM usage per process.  When situations occur that cause these tables to
fill up, the kernel can't track RAM any more (even if it has plenty) so
it has to scan the entire table to garbage collect unused PV entries.
Depending on the exact circumstance, this usually won't hurt much, but
it does create a performance problem while the kernel is working on it.

Raising PMAP_SHPGPERPROC works most of the time.  You can also re-tune
your Apache settings to keep processes from constantly spawning and
dying.  For example, set the max spare and min spare servers settings
higher, so Apache keeps more spare servers around instead of spawning
them on demand and killing them when the demand ends.
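
For instance, with a prefork setup like the one in the original post, something
along these lines (the numbers are only an illustration, size them to your RAM)
keeps a larger pool of idle children around instead of forking and killing them
all the time:

    StartServers         32
    MinSpareServers      32
    MaxSpareServers      64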

Another option is to upgrade to 7.X, which seems to have replaced this
mechanism with something more dynamic that doesn't have this problem.

-- 
Bill Moran
http://www.potentialtech.com


Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Simon Chang
 Raising PMAP_SHPGPERPROC works most of the time.  You can also re-tune
 your Apache setting to keep processes from constantly spawning and
 dying.  For example, set the max spare and min spare servers settings
 higher, so Apache keeps more spare servers around instead of spawning
 them on demand and killing them when the demand ends.

 Another option is to upgrade to 7.X, which seems to have replaced the
 mechanism by which this is done to be more dynamic and not have this
 problem.

Since he has only this server in production, and does not like
re-compiling the kernel on it (and rebooting), the only sensible option
is retuning Apache and restarting services (since an upgrade to
7.X would be even more involved than a kernel rebuild).

By the way, does anyone know whether there is any way to tune
PMAP_SHPGPERPROC using sysctl, or does such button/knob not exist?

SC


Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Bill Moran
In response to Simon Chang [EMAIL PROTECTED]:
 
 By the way, does anyone know whether there is any way to tune
 PMAP_SHPGPERPROC using sysctl, or does such button/knob not exist?

No.  I've had this discussion with the developer who originally wrote
that code.  The table size is too deep inside the kernel to adjust it
at run time.  The kernel needs to know what it is when it boots, and it
can't change after.

-- 
Bill Moran
http://www.potentialtech.com


Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Matthew Seaman

Francis Dubé wrote:

Hi everyone,

I'm running a webserver on FreeBSD (6.2-RELEASE-p6) and I have this
error in my logs:


collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

I've read that this is mainly caused by Apache spawning too many
processes. Everyone seems to suggest decreasing the MaxClients
directive in Apache (set to 450 at the moment), but here's the
problem... I need to increase it! During peaks all the processes are in
use; we even have little drops sometimes because there aren't enough
processes to serve the requests. Our traffic is increasing slowly over
time, so I'm afraid that it'll become a real problem soon. Any tips on
how I could deal with this situation, on Apache's or FreeBSD's side?


Here's the useful part of my conf :

Apache/2.2.4, compiled with prefork mpm.
httpd.conf :
[...]
<IfModule mpm_prefork_module>
    ServerLimit          450
    StartServers           5
    MinSpareServers        5
    MaxSpareServers       10
    MaxClients           450
    MaxRequestsPerChild    0
</IfModule>

KeepAlive On
KeepAliveTimeout 15
MaxKeepAliveRequests 500
[...]


You don't say what sort of content you're serving, but if it is
PHP, Ruby-on-Rails, Apache mod_perl or similar dynamic content then 
here's a very useful strategy.


Something like 25-75% of the HTTP queries on a dynamic web site will
typically be for static files: images, CSS, javascript, etc.  An
instance of Apache padded out with all the machinery to run all that
dynamic code is not the ideal server for the static stuff.  In fact,
if you install one of the special super-fast webservers optimised
for static content, you'll probably be able to answer all those 
requests from a single thread of execution of a daemon substantially
slimmer than Apache.  I like nginx for this purpose, but lighttpd
is another candidate, or you can even use a 2nd highly optimised
instance of Apache with almost all of the loadable modules and other
stuff stripped out.


The tricky bit is managing to direct the HTTP requests to the appropriate 
server.  With nginx I arrange for apache to bind to the
loopback interface and nginx handles the external network i/f, but
the document root for both servers is the same directory tree.  Then
I'd filter off requests for, say, PHP pages using a snippet like so
in nginx.conf:

    location ~ \.php$ {
        proxy_pass   http://127.0.0.1;
    }

So all the PHP gets passed through to Apache, and all of the other content 
(assumed to be static files) is served directly by nginx[1].
It also helps if you set nginx to put an 'Expires:' header several
days or weeks in the future for all the static content -- that way
the client browser will cache it locally and it won't even need to
connect back to your server and try doing an 'if-modified-since' HTTP
GET on page refreshes.
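
Something like the following in the nginx server block does that (the
extensions and lifetime are just an example):

    location ~* \.(gif|jpe?g|png|css|js)$ {
        expires 30d;
    }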

The principal effect of this is that Apache+PHP basically spends all
its time doing the heavy lifting it's optimised for, and doesn't get
distracted by all the little itty-bitty requests.  So you need fewer
Apache child processes, which reduces memory pressure and to some
extent competition for CPU resources.


An alternative variation on this strategy is to use a reverse proxy
-- varnish is purpose designed for this, but you could also use squid
in this role -- the idea being that static content can be served mostly
out of the proxy cache and it's only the expensive to compute dynamic
content that always gets passed all the way back to the origin server.

You can also see the same strategy commonly used on Java based sites,
with Apache being the small-and-lightning-fast component, shielding
a larger and slower instance of Tomcat from the rapacious demands of 
the Internet surfing public.


Cheers,

Matthew

[1] Setting 'index index.php' in nginx.conf means it will DTRT with
   directory URLs too.

--
Dr Matthew J Seaman MA, D.Phil.                   7 Priory Courtyard
                                                  Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey     Ramsgate
                                                  Kent, CT11 9PW





Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2008-10-27 Thread Bob Johnson
On 10/27/08, Simon Chang [EMAIL PROTECTED] wrote:
 Raising PMAP_SHPGPERPROC works most of the time.  You can also re-tune
[...]
 By the way, does anyone know whether there is any way to tune
 PMAP_SHPGPERPROC using sysctl, or does such button/knob not exist?

It is tunable with a sysctl in AMD64 kernels, but apparently not in
i386. The logged error message in AMD64 mentions the sysctl, at least
in 7.0-R.
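
If memory serves, the knob shows up as vm.pmap.shpgperproc (double-check the
exact name on your system); it can be read with sysctl and raised from
/boot/loader.conf at boot, e.g.:

    # sysctl vm.pmap.shpgperproc
    # echo 'vm.pmap.shpgperproc="300"' >> /boot/loader.conf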

-- Bob Johnson


collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2007-09-24 Thread forum
Hi,

I have a new 6.2 install running postfix, amavisd-new, clamav and SpamAssassin,
and over the weekend the server stopped responding with the following error:
"collecting pv entries -- suggest increasing PMAP_SHPGPERPROC". I did some
Google searching on the error and found the same problem with Apache, but none
with my configuration. Most of the sites say to increase PMAP_SHPGPERPROC, but
none say how or what to increase it to.

Does anyone have a suggestion on how I should go about troubleshooting this or
what I should change my PMAP_SHPGPERPROC to? Could this be a one-time fluke and
I shouldn't worry about it?

 

Thanks







Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2007-09-24 Thread Bill Moran
In response to [EMAIL PROTECTED]:

 Hi,
 
 I have a new 6.2 install running postfix, amavisd-new, clamav and SpamAssassin,
 and over the weekend the server stopped responding with the following error:
 "collecting pv entries -- suggest increasing PMAP_SHPGPERPROC".  I did some
 Google searching on the error and found the same problem with Apache, but none
 with my configuration. Most of the sites say to increase PMAP_SHPGPERPROC, but
 none say how or what to increase it to.

I've never seen a modern version of FreeBSD lock up as a result of this,
so that's a little odd.

 Does anyone have a suggestion on how I should go about troubleshooting this or
 what I should change my PMAP_SHPGPERPROC to? Could this be a one-time fluke
 and I shouldn't worry about it?

I had to research this earlier this year.  The default is 200, so in my
case, raising the value to 250 solved the problem.  I fixed it by adding
the setting to my kernel config and building a new kernel.  I believe
you can also set it in loader.conf.
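
For reference, the kernel config line looks like this (250 is just the value
I ended up using):

    options         PMAP_SHPGPERPROC=250

and the loader.conf route, if your kernel supports it, would be the
corresponding tunable, e.g. vm.pmap.shpgperproc="250".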

I haven't tried setting it higher than 250 (haven't had the need) but
I've seen some posts suggesting that setting it too high can cause
kernel panics.  I recommend bumping it to 250, then go to 300 if the
problem doesn't go away -- but in any event, don't increase it
drastically.

-- 
Bill Moran
http://www.potentialtech.com


Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2007-09-24 Thread forum
It looks like a process associated with Postfix is eating up all of the memory
and crashing the system. I'm trying to find out which one now.

Thanks for your help.

Thron

On Mon Sep 24 9:13, Bill Moran wrote:

In response to [EMAIL PROTECTED]:

 Hi,
 
 I have a new 6.2 install running postfix, amavisd-new, clamav and SpamAssassin,
 and over the weekend the server stopped responding with the following error:
 "collecting pv entries -- suggest increasing PMAP_SHPGPERPROC".  I did some
 Google searching on the error and found the same problem with Apache, but none
 with my configuration. Most of the sites say to increase PMAP_SHPGPERPROC, but
 none say how or what to increase it to.

I've never seen a modern version of FreeBSD lock up as a result of this,
so that's a little odd.

 Does anyone have a suggestion on how I should go about troubleshooting this
 or what I should change my PMAP_SHPGPERPROC to? Could this be a one-time
 fluke and I shouldn't worry about it?

I had to research this earlier this year.  The default is 200, so in my
case, raising the value to 250 solved the problem.  I fixed it by adding
the setting to my kernel config and building a new kernel.  I believe
you can also set it in loader.conf

I haven't tried setting it higher than 250 (haven't had the need) but
I've seen some posts suggesting that setting it too high can cause
kernel panics.  I recommend bumping it to 250, then go to 300 if the
problem doesn't go away -- but in any event, don't increase it
drastically.

-- 
Bill Moran
http://www.potentialtech.com





collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2005-06-16 Thread Tuc at T-B-O-H
Hi,

I just got this on one of our machines. It talks about Apache
being the issue, but when I run "ipcs -a ; sysctl vm.zone | grep PV"
I get:

odin# ipcs -a ; sysctl vm.zone | grep PV
Message Queues:
T ID KEY MODE OWNER GROUP CREATOR CGROUP CBYTES QNUM QBYTES LSPID LRPID STIME RTIME CTIME

Shared Memory:
T ID KEY MODE OWNER GROUP CREATOR CGROUP NATTCH SEGSZ CPID LPID ATIME DTIME CTIME
m  65536 1936028777 --rw-rw-rw- setiathome setiathome setiathome setiathome  1 131224607  34106  1:29:54 19:29:53 19:56:57

Semaphores:
T ID KEY MODE OWNER GROUP CREATOR CGROUP NSEMS OTIME CTIME
s  65536 1936028777 --rw-rw-rw- setiathome setiathome setiathome setiathome  1  1:29:54 19:56:57

PV ENTRY: 24,  2084665,  39395, 1837920, 5225493555

This is a server that's running a stock SMP kernel, has 4G of memory,
104 total processes, of which 4 are setiathome, and gets maybe 50 hits A DAY.

I saw where you could sysctl a fix, but it didn't seem to mention
a guideline for the setting.

Where should I go, or can I just let it go for now?

Thanks, Tuc


Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2003-08-14 Thread Markie
Hi,

I have had this error once before, though it seemed to freeze/panic the
machine. I think it may have been related to Apache or PHP, since I was doing
a 'stress test' at the time. I would guess that you can up PMAP_SHPGPERPROC
in the kernel, perhaps via a sysctl.

Markie

- Original Message -
From: admin [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, August 12, 2003 2:23 PM
Subject: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC



 FreeBSD 4.8 Stable

 Hi there,

 I am seeing the following log entry in my /var/log/messages

 any clue what I can do to cure this issue?


  snip 

 Aug 12 03:00:55 /kernel: pmap_collect: collecting pv entries -- suggest
 increasing PMAP_SHPGPERPROC


 - snip 
 - noah





collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2003-08-14 Thread admin

FreeBSD 4.8 Stable

Hi there,

I am seeing the following log entry in my /var/log/messages

any clue what I can do to cure this issue?


 snip 

Aug 12 03:00:55 /kernel: pmap_collect: collecting pv entries -- suggest
increasing PMAP_SHPGPERPROC


- snip 
- noah



Re: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC

2003-08-14 Thread admin
On Tue, 12 Aug 2003 21:44:20 +0100, Markie wrote
 Hi,
 
 I have had this error once before, though it seemed to freeze/panic the
 machine. I think it may have been related to Apache or PHP, since I
 was doing a 'stress test' at the time. I would guess that you can up
 PMAP_SHPGPERPROC in the kernel, perhaps via a sysctl.
 



Hi,

Looks like the box is stressing out here.  Does anybody know some magic
numbers to set in the kernel to handle this situation?  Any good websites on
the matter?

I have 512MB in the box.  I should be able to map more virtual memory but just
don't know how.


- noah



 Markie
 
 - Original Message -
 From: admin [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, August 12, 2003 2:23 PM
 Subject: collecting pv entries -- suggest increasing PMAP_SHPGPERPROC
 
 
  FreeBSD 4.8 Stable
 
  Hi there,
 
  I am seeing the following log entry in my /var/log/messages
 
  any clue what I can do to cure this issue?
 
 
   snip 
 
  Aug 12 03:00:55 /kernel: pmap_collect: collecting pv entries -- suggest
  increasing PMAP_SHPGPERPROC
 
 
  - snip 
  - noah
 
 

