On 05/ 5/11 02:37 AM, Dave Miner wrote:
On 05/ 4/11 09:07 PM, Jack Schwartz wrote:
On 05/ 4/11 07:30 AM, Dave Miner wrote:
On 05/ 3/11 05:15 PM, Jack Schwartz wrote:
...
478, 491: This may have already been discussed, and may be better handled
as a follow-on bug, but it seems overly simplistic to calculate swap and
dump as 1/2 the RAM size. These days some systems have so much RAM that
they have little or no swap. And as systems with terabytes of RAM work
their way into the world, do we need swap and dump that are half the size
of RAM?
These are heuristics that can be useful. Dump size may need to be as
large as physical memory, depending on how dumpadm is configured. If
swap isn't going to add significantly to the available virtual memory,
it's probably not worth bothering with at all.
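
As a rough illustration of that trade-off, a minimal sketch along these
lines captures the half-of-RAM rule plus a check that swap is worth
having at all (the function name and the 64 GB cutoff are assumptions
for illustration, not taken from the webrev):

MAX_MEM_NEEDING_SWAP_MB = 64 * 1024   # assumed cutoff, not from the thread

def simple_swap_dump_sizes(phys_mem_mb):
    """Return (swap_mb, dump_mb) using the half-of-RAM rule of thumb."""
    half = phys_mem_mb // 2
    # On very large-memory systems extra swap adds little to the virtual
    # memory already available, so one option is to skip it entirely.
    swap = half if phys_mem_mb < MAX_MEM_NEEDING_SWAP_MB else 0
    # Depending on how dumpadm is configured (e.g. dumping all memory
    # pages), the dump device may need to approach physical memory in
    # size, so half of RAM is only a starting point here.
    dump = half
    return swap, dump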
In the past we (or maybe it was ZFS) had a more intelligent algorithm for
calculating swap and dump. I'm just wondering why that isn't used. After
all, if we're going to provide default swap and dump sizes, we may as
well provide some that are potentially more useful.
True, calc_swap_size and calc_dump_size in liborchestrator were somewhat
more complex than this. Some of that seems unnecessary to continue; for
example, capping the values as they did (especially for dump) is
questionable. I think it's reasonable to expect that most sites will set
this to something they've standardized on.
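
For a purely illustrative example, a capped calculation in the spirit of
those functions might look like the sketch below; the cap values are
made up for illustration and are not the constants liborchestrator
actually used:

SWAP_CAP_MB = 32 * 1024   # assumed cap, not liborchestrator's value
DUMP_CAP_MB = 16 * 1024   # assumed cap, not liborchestrator's value

def capped_swap_dump_sizes(phys_mem_mb):
    """Return (swap_mb, dump_mb): half of RAM, clamped to a fixed cap."""
    half = phys_mem_mb // 2
    swap = min(half, SWAP_CAP_MB)
    # Capping dump is the questionable part: a dumpadm setup that dumps
    # all memory pages can need a device close to RAM in size, so a hard
    # cap risks an undersized dump device on large-memory systems.
    dump = min(half, DUMP_CAP_MB)
    return swap, dump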
The swap/dump calculations in this webrev are just placeholders; the
real calculations (which will be in the new webrev) make use of
controller.py to calculate the swap/dump requirements, as posted for
review by dermot:
http://cr.opensolaris.org/~dermot/controller/
In particular, see the method calc_swap_dump_size().
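
This is not the actual method from that webrev, but as a hedged sketch,
a calc_swap_dump_size()-style calculation that steps swap and dump with
memory size could look like the following (all breakpoints and sizes
below are assumed values):

def tiered_swap_dump_sizes(phys_mem_gb):
    """Return (swap_gb, dump_gb) from assumed memory-size tiers."""
    if phys_mem_gb <= 8:
        # Small systems: the traditional half-of-RAM sizing for both.
        swap = dump = phys_mem_gb / 2.0
    elif phys_mem_gb <= 64:
        # Mid-size systems: proportionally less swap; dump still tracks RAM.
        swap, dump = phys_mem_gb / 4.0, phys_mem_gb / 2.0
    else:
        # Very large systems: a fixed swap size; dump still scales so it
        # can hold whatever dumpadm is configured to save.
        swap, dump = 16.0, phys_mem_gb / 4.0
    return swap, dump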
cheers
Matt
Dave
_______________________________________________
caiman-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/caiman-discuss