On Mon, 7 May 2001, Ralf S. Engelschall wrote:

> BTW, I thought about all this again and think one can solve these issues in a
> portable way: you just allocate a rather large shared memory segment (or
> multiple shared memory segments) at startup, but do not touch the mapped
> memory at all until it is actually used.  The idea is that unless you touch
> the memory segments they do not cost any real memory, but the memory mapping
> is already present in all forked processes from the start. How about this?

there's a system-wide sysv shm limit on sysv derivatives and linux.

i also think linux kernels pre-2.4 put sysv shm into non-pageable memory
(and i'm not sure that's fixed in 2.4 either, i haven't checked)... it's
sort of intended for small segments.  linux pre 2.4 doesn't have anonymous
shared mappings (and again i think it's a new feature as of 2.4, but i'm
too lazy to make sure, anonymous private mappings have been there for ages
and are actually used by malloc).

so i'm kind of thinking you'd be stuck using a file-backed mmap() method
on lots of systems...  which means extra disk i/o and loss of performance.

oh hmm i don't know anything about posix shm, but that's new in linux 2.4.
and maybe anon shared mappings instead of sysvshm on solaris is OK.  (i
assume all the BSDs use anon shared mappings.)  maybe you don't have to go
to files after all.

i'm not really sure what info it is you're putting into the shared-memory
that requires lots of memory / dynamic sizing.  all i can think of is the
scoreboard and SSL session ids.  the scoreboard can be architected to not
require dynamic sizing (may require MPM code to answer a scoreboard query
for each process).

how big do SSL session caches get anyhow?

-dean
