On 11/4/07, jpd <[EMAIL PROTECTED]> wrote:
>    PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
>   5496 root      298M   78M run     49    0   0:02:20 3.8% hlxserverplus/11
>   5900 webservd  163M   45M sleep   59    0   0:00:09 1.9% httpd/1
>   5884 webservd  166M   43M sleep   59    0   0:00:06 0.0% httpd/1
>   5887 webservd  163M   41M sleep   59    0   0:00:10 0.0% httpd/1
>   5906 webservd  159M   35M sleep   59    0   0:00:01 0.0% httpd/1
>   5888 webservd  165M   34M sleep   59    0   0:00:04 0.0% httpd/1
>   5885 webservd  160M   33M sleep   59    0   0:00:03 0.0% httpd/1
>   5886 webservd  160M   29M sleep   59    0   0:00:03 0.1% httpd/1
>   5902 webservd  160M   29M sleep   59    0   0:00:05 0.0% httpd/1
>   5905 webservd  160M   21M sleep   59    0   0:00:01 0.0% httpd/1
>   5878 root      158M   16M sleep   59    0   0:00:01 0.0% httpd/1
> ......
> ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
>      5       37  270M  121M   9.5%   0:01:12 2.1% www
>      4       35  330M   80M   6.3%   0:03:58 3.8% helix
>      0       49   68M   93M   7.3%   0:37:10 0.8% global
>      3       28   37M   53M   4.1%   0:23:21 0.7% icecast1
>
> so according to prstat apache is still using about 1g of swap+memory and
> more RSS that the zones is?
But you forget that many processes will have shared mappings. If a 10 MB
file is mapped by 100 processes, that is only 10 MB used, not 1 GB.

The sum of the swap reservation sizes from pmap -S <all pids> would likely
be more in line with the value used for the swap resource control; a rough
way to add those up is sketched below.

In the not too distant past, prstat would naively sum the RSS of all the
processes in the zone and say that was the RSS for the zone. This gave
ridiculous results when you had something like Oracle with hundreds of
processes mapping an SGA (shared memory segment) that was many gigabytes
in size. I've seen large systems say that tens of terabytes of RAM were in
use when the systems (obviously) had less than a terabyte of RAM.

You can also use "vmstat" and "vmstat -p" to confirm that paging activity
is high and to somewhat characterize what is being paged.
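For example, a small loop along these lines could total the per-process
swap reservations for one zone. This is an untested sketch: it assumes
your pgrep supports -z, that it is run with enough privilege to pmap other
users' processes (e.g. as root in the global zone), and that the swap
figure is the 4th field of the "total Kb" summary line in pmap -S output,
so check a single pmap -S run on your release first.

  #!/bin/sh
  # Sum the swap reservations reported by pmap -S for every process in a
  # zone, for comparison with the zone's SWAP column in prstat -Z or its
  # swap resource control.
  zone=${1:-www}        # zone name; "www" is only an example default
  total_kb=0
  for pid in `pgrep -z "$zone"`; do
      # Pull the swap figure from the "total Kb" summary line.  Processes
      # may exit between pgrep and pmap, so failures are ignored quietly.
      kb=`pmap -S "$pid" 2>/dev/null | awk '/total Kb/ {print $4}'`
      [ -n "$kb" ] && total_kb=`expr $total_kb + $kb`
  done
  echo "swap reserved by processes in zone $zone: ${total_kb} Kb"

That total should land closer to the zone's SWAP figure than adding up the
per-process SIZE column does, since SIZE counts shared mappings once per
process.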
-- 
Mike Gerdts
http://mgerdts.blogspot.com/