I think I am misunderstanding something about Solaris virtual memory...

I was running some tests on our new T5120 with 16GB of RAM under Solaris 10u6
(and previously got into the same situation with an X4600 running OpenSolaris
build 105): when the systems arrived, we assumed they had "lots of RAM" for our
tasks and so would not need swap enabled (the disk activity would just slow
them down).

However, as we tested scaling and put more load on the servers (e.g. allowing
Apache to spawn more children), we were surprised to see that the system never
used more than half of the available RAM: free memory did not drop below 8GB.
Beyond that point, further memory allocations failed, and the system could not
fork new processes, including new sshd, vmstat, top and bash instances.

"Swap space" measured as available space in /tmp is a few megabytes.
However according to the running top and vmstat, "Free memory" is 8Gb.

Then, mid-test, I added 16GB of swap. It instantly showed up as available
space in /tmp, and the scaling tests could continue. The system still did not
actually page out to disk, but it did successfully drive "free memory" down
to a few hundred megabytes.
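For the record, I added the swap roughly like this (the file path is just an
example, not the actual one I used):

```shell
# create a 16GB file and add it as a swap device (Solaris)
mkfile 16g /export/swapfile
swap -a /export/swapfile

# verify: list swap devices and re-check space in /tmp
swap -l
df -h /tmp
```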

Hence my questions: why is the Solaris VM subsystem so much like a greedy
hamster? Why can't Solaris use its available RAM until it is depleted, unless
it "knows" there is some swap waiting just in case, even if that swap, as far
as I can tell, is never actually used?

Also, the 8GB of RAM remaining free looks like some threshold (50% of total
RAM). Is that so? Is it configurable (i.e. can I ask the OS to leave only
10% free)? And what is the point of reserving this RAM if it is never handed
out to (userspace?) processes?

//Jim
-- 
This message posted from opensolaris.org
_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
