Greetings,
I have some questions about optimizing memory usage. I could probably get some of these answers myself with more study of the mod_perl / perl source, but I was hoping someone might just know the answer and be able to tell me :)
First off, am I correct in the assumption that it has been wise, even in mod_perl 1 (under Apache's child-per-request model), to preload all of your modules for memory savings?
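For concreteness, by "preload" I mean the usual startup.pl arrangement - something pulled in once by the parent with PerlRequire, along these lines (the module names below are just placeholders for my own stuff):

# startup.pl, loaded once in the parent via "PerlRequire startup.pl"
use strict;
use warnings;

use DBI ();
use Template ();
use My::App::Config ();   # placeholder for my own modules

1;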
Where exactly do these savings come from if the processes are forked? Is there some sort of mmap / shmem way that the Apache children share their Perl trees? Or perhaps the processes each have all that memory *allocated* individually, but because of COW pages from the OS, you only need one copy resident (hence less paging)?
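I figure I can sanity-check whatever the answer is with GTop (the libgtop binding), assuming it's installed, e.g.:

use GTop ();

# peek at how much of an Apache child's memory is shared with the parent
my $proc_mem = GTop->new->proc_mem($$);
printf "size: %d  shared: %d  rss: %d\n",
    $proc_mem->size, $proc_mem->share, $proc_mem->rss;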
In answering the above questions - are these reasons / behaviors consistent with mod_perl 2 under prefork?
Also - as of the current Perl 5.8 series, we're still not sharing / doing COW with variable memory (SV/HV/AV), right?
Now as for an optimization question... if the ops in the code tree are shared, let's suppose I declare this subroutine via a scalar passed to eval prior to the clone process:
sub _get_Big_Data_Structure {
    return {
        key => {
            nested => ['values'],
            lots   => {'of' => 'data'},
        },
    };
}
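To be concrete, by "a scalar passed to eval" I mean roughly this:

# the same sub, compiled from a string at server startup,
# before the interpreter is cloned / the children are forked
eval q{
    sub _get_Big_Data_Structure {
        return {
            key => {
                nested => ['values'],
                lots   => {'of' => 'data'},
            },
        };
    }
    1;   # so a successful compile returns true to eval
} or die $@;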
The thing is that I have a big nested config structure, along with lots of other big nested structures. A given request doesn't need *all* of the data, so I've been brainstorming: I thought about writing a "reduce" method, dispatched around my module tree from one of the prior-to-clone handlers, that would take these structures, use Data::Dumper to get their "source code" form, and eval them into subroutines in some "stash" package.
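Roughly, I'm picturing something along these lines (just a sketch - "My::Stash" and the sub name are made up, and of course Data::Dumper can't round-trip coderefs, so this only works for plain nested data):

use Data::Dumper ();

# compile a structure into a sub that rebuilds it on demand, installed
# in a "stash" package before the clone/fork so only the optree sticks around
sub reduce_to_sub {
    my ($name, $structure) = @_;
    local $Data::Dumper::Terse  = 1;   # dump a bare expression, no '$VAR1 ='
    local $Data::Dumper::Indent = 0;   # compact; only eval will ever read it
    my $code = 'package My::Stash; sub ' . $name
             . ' { return ' . Data::Dumper::Dumper($structure) . ' } 1;';
    eval $code or die "could not compile My::Stash::$name: $@";
}

# e.g. reduce_to_sub('get_big_config', $big_config);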
I don't particularly care if this adds seconds to the Apache startup time if it can reduce my runtime memory usage reasonably. I suppose the only other thing I have to be concerned about, if the above idea works, is how long the "_get_Big_Data_Structure" call would take.
Thoughts?
Thanks,
Chase Venters