Hi folks,
I'm wondering whether the recent improvements (and ongoing plans)
to reduce client memory consumption could be used to define a
minimum memory requirement, i.e. "you must have at least xxMB of
memory to run pkg" or similar? This would be useful for users of
zones with memory caps who want to run pkg within the zone - or is
the advice simply not to do this, and instead to run
"pkg -R <zone_root> <command>" from the global zone?
Of course, it's not just about zones: I imagine the same question
applies to embedded environments with small memory configurations.
The zones example is simply how the question came up...
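For what it's worth, the global-zone alternative I mean would look
something like the sketch below (the zonepath /zones/web01 is a
made-up example; substitute your own):

```shell
# Run from the global zone: point pkg at the non-global zone's root,
# so the client's memory use is charged to the (uncapped) global zone
# rather than to the memory-capped zone itself.
# /zones/web01/root is a hypothetical zonepath; adjust to suit.
pkg -R /zones/web01/root install apache2
```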
Background:
A handy feature of zones is that the overhead of the zone
itself is remarkably lightweight, meaning that one can do
quite a lot with relatively little overhead (e.g. a basic
apache2 zone can serve casual traffic to a simple web
site in <80MB of memory and ~0.1% CPU). Great!
A customer at a recent Oracle welcome event was telling me
how they provision zones as "virtual private servers" and
use resource caps to differentiate their pricing tiers: the entry
level virtual private server product is a simple hosting
zone with a 128MB memory cap, more than enough for apache2
and your average home user or cottage-industry web site.
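(For reference, a cap like that is just a capped-memory resource in
the zone's configuration; a hypothetical sketch for a zone named
"web01":

```shell
# Hypothetical sketch: cap zone "web01" at 128MB of physical memory.
# The capped-memory resource also takes swap= and locked= properties;
# a tight swap cap is the kind of limit pkg tends to run into.
zonecfg -z web01 'add capped-memory; set physical=128m; end; commit'
```
)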
Occasionally it may be necessary to install additional
software in the zone, and it seems that running "pkg install"
in a zone with a memory cap of <512MB often fails due to
virtual memory starvation, at least up to and including
build 134.
I realize that getting pkg to cope with tight memory
constraints like this may not be practical, or it could
be that the performance trade-off would be unacceptable.
I'm just wondering what the current thinking is on this?
Cheers,
Jimmo
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss