I’m trying to find out how much more load (in terms of memory) we can place on 
our server.  It’s a T6320 (2 * 8 core) with 64GB running Solaris 10 5/09 and 
Oracle with 20 databases over about 15 zones.  We need to move more Oracle 
databases onto this server.

Here is the output of echo "::memstat" | mdb -k:

Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    1112093              8688   14%
ZFS File Data               33450               261    0%
Anon                      6269236             48978   76%
Exec and libs              125767               982    2%
Page cache                 490255              3830    6%
Free (cachelist)           109968               859    1%
Free (freelist)             79794               623    1%

Total                     8220563             64223
Physical                  8189775             63982
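
As a quick sanity check on the MB column (this is a sun4v box, so the base
page size should be 8KB; pagesize(1) will confirm):

  pagesize    # should print 8192 on sun4v
  nawk 'BEGIN { printf("%d MB\n", 6269236 * 8192 / 1048576) }'   # Anon row -> 48978 MB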

The largest component is anonymous memory, so I need to break it down into
its components to see whether I have memory available for the new databases.
Here is what I have accounted for so far:
- 19GB in Oracle locked shared memory (obtained by running ipcs -a in every
zone; the script sketch after this list is what I used)
- 1GB of tmpfs usage (which I'm guessing is part of anonymous memory)
- 15GB marked as anonymous by pmap -x across all processes (that took 3 hours
to run!), mostly Java processes
- 8GB for ZFS (from kstat zfs:0:arcstats:size)
43GB total (so far).
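
For reference, here is the rough script I used for the per-zone ipcs sweep,
in case anyone spots a flaw in it. It assumes it runs from the global zone,
and that SEGSZ is the 10th field on the "m" (shared memory) rows of ipcs -a;
check that against the header on your build.

  #!/bin/ksh
  # Sum SysV shared memory segment sizes (SEGSZ) across every running zone.
  # Run from the global zone; zlogin does not apply to the global zone itself.
  for z in $(zoneadm list); do
      if [ "$z" = "global" ]; then
          ipcs -a
      else
          zlogin "$z" ipcs -a
      fi
  done 2>/dev/null |
  nawk '$1 == "m" { sum += $10 }   # $10 = SEGSZ here -- verify on your build
        END { printf("SysV shm total: %.1f GB\n", sum/1024/1024/1024) }'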

I note that “ZFS File Data” accounts for only 261MB in the ::memstat output,
yet kstat zfs:0:arcstats:size reports 8GB, so I'm guessing the rest of the
ARC is landing in anonymous memory too. (The ::memstat output seems very
confusing to me; a single total for all ZFS-related pages would be clearer.)
What does “ZFS File Data” actually refer to? On other servers we have, this
figure can be up to 60% of physical RAM.
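
To try to reconcile the two numbers I have been dumping the full ARC kstats
and the ::arc dcmd; which breakdown fields exist varies by release, so I am
not sure how far these can be trusted:

  kstat -p zfs:0:arcstats    # full list; size, hdr_size, etc.
  echo "::arc" | mdb -k      # the same counters via the kernel debugger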

This all took me a long time to work out, and hours of system time. And I
still have not accounted for the remaining 6GB of anonymous memory.

Is there a better way to work this out and give an accurate breakdown of where 
all my anonymous memory is going?
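
The quickest cross-checks I have found so far are these two; my assumption is
that the swap -s reservation figure roughly tracks the anon total, and that
prstat -Z at least splits usage per zone (reservations are not resident
pages, so both are only approximations):

  swap -s          # allocated + reserved anonymous memory (virtual, not RSS)
  prstat -Z 1 1    # one sample; per-zone summary lines show SWAP and RSS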

I realise we could simply keep increasing the memory load until the page
scanner starts running, but that is not really an acceptable approach.
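
If we did end up going that way, I would at least watch the scan rate rather
than wait for paging to bite:

  vmstat 5    # sr (last column of the page group); sustained non-zero
              # values mean the scanner is running and memory is tight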

James.