Hi,

We have a live environment of 3 machines running ND 4.13 on Solaris 2.6,
configured as one cell. We have 5 CP processes running, and the maximum
amount of memory is set to 200M on the java command line. Each machine has
4 processors and 1 GB of RAM.

We have a total of about 50 sizeable projects running, with about 5 essential
projects preloaded. We've noticed that after running for some time, by which
point most of the projects should already be loaded into memory, the CP
process never went up to the full 200M or anything close to it (it hovers
around 100M).
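For context, the 100M figure above was observed from outside the process (e.g. with ps). As a cross-check, heap usage can also be logged from inside the VM; the sketch below is not ND code, just a minimal probe using the standard Runtime methods (the class name HeapProbe is ours):

```java
// Minimal sketch (not ND code): log JVM heap usage from inside the VM,
// as a cross-check against externally observed process size.
public class HeapProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory();         // heap currently committed by the VM
        long used  = total - rt.freeMemory();  // bytes actually in use on the heap
        System.out.println("used=" + used + " total=" + total);
    }
}
```

Note that totalMemory() reports what the VM has currently committed, not the -mx ceiling, which may explain why the process size stays well below the 200M cap while projects are still resident.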

Another notable observation is that some projects that have presumably been
loaded into all CPs seem to be unloaded by ND, judging from the fact that
project loading takes place again when you access a project after accessing,
say, 10 other projects.

While we agree that ND has to dereference objects at some point before the
CP process exceeds the maximum memory allocated to the VM, we are wondering
what strategy or algorithm ND uses for project unloading. Is it driven by
the number of projects loaded into memory, or by the free heap size?
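We don't know which policy ND actually implements, but the behaviour we see (a project is reloaded after roughly 10 other projects have been accessed) is consistent with a count-based LRU cache. A hypothetical sketch of that strategy, using the standard LinkedHashMap eviction hook (the class and the maxProjects cap are our assumptions, not ND internals):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a count-based LRU project cache. This is NOT
// ND's actual algorithm; it only illustrates the "number of projects
// loaded" eviction strategy asked about above.
public class ProjectCache extends LinkedHashMap<String, Object> {
    private final int maxProjects; // assumed eviction threshold

    public ProjectCache(int maxProjects) {
        // accessOrder = true: iteration order is least-recently-accessed first
        super(16, 0.75f, true);
        this.maxProjects = maxProjects;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
        // Unload the least-recently-used project once the cap is exceeded
        return size() > maxProjects;
    }
}
```

Under this model, accessing a project refreshes its position, and loading an 11th project with a cap of 10 would silently unload the least-recently-used one, which matches what we observe.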

Can someone shed some light on this?


Best regards
Darren

_________________________________________________________________________

For help in using, subscribing, and unsubscribing to the discussion
forums, please go to: http://www.netdynamics.com/support/visitdevfor.html

If in dire need of help, email: [EMAIL PROTECTED]
