On Sun, 2010-02-21 at 22:27 +0100, Henrik Nordström wrote:
> On Sat, 2010-02-20 at 18:25 -0700, Alex Rousskov wrote:
> >
> > The reasons you mention seem like a good justification for this
> > option's official existence. I do not quite get the "fork bomb"
> > analogy because we are not creating more than a configured number of
> > concurrent forks, are we? We may create processes at a high rate, but
> > there is nothing exploding here, is there?
>
> With our large in-memory cache index, even two concurrent forks is
> kind of exploding on a large server. Consider, for example, the not
> unrealistic case of an 8GB cache index. I actually have some clients
> with such indexes.
I have an idea about this.
Consider a 'spawn_helper'.
The spawn helper would be started up early, before index parsing, and
would never be killed or restarted. It would have, oh, a footprint of
several hundred KB at most.
The command protocol for it would be pretty similar to the SHM disk I/O
helper's, but for processes. Something like:

  squid->helper: spawn stderrfd argv (escaped/encoded to be line- and
                 NUL-string safe)
  helper->squid: pid, stdinfd, stdoutfd
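To make that concrete, here is a hedged sketch of one possible way to
make an argv line- and NUL-safe: percent-encode the dangerous bytes and
join the arguments with spaces. This is purely illustrative; the actual
encoding Squid would use is not specified above.

```python
# Illustrative argv encoding for a line-oriented spawn protocol.
# Escapes NUL, newline, CR, space, and '%' itself so a whole argv can
# travel as a single text line and round-trip losslessly.

_UNSAFE = {0x00, 0x0A, 0x0D, 0x20, 0x25}  # NUL, \n, \r, space, %

def encode_argv(argv):
    """Encode a list of argument strings into one protocol-safe line."""
    words = []
    for arg in argv:
        raw = arg.encode("utf-8")
        words.append("".join(
            "%%%02X" % b if b in _UNSAFE else chr(b) for b in raw))
    return " ".join(words)

def decode_argv(line):
    """Invert encode_argv: split on spaces, undo percent-escapes."""
    args = []
    for tok in line.split(" "):
        buf = bytearray()
        i = 0
        while i < len(tok):
            if tok[i] == "%":
                buf.append(int(tok[i + 1:i + 3], 16))
                i += 3
            else:
                buf.append(ord(tok[i]))
                i += 1
        args.append(buf.decode("utf-8"))
    return args
```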
This would permit several interesting things:
- starting helpers would no longer incur the massive VM overhead of
forking the main Squid process
- we won't need to worry about vfork, at least for a while
- starting helpers can be truly asynchronous from Squid's core
processing (at the moment everything gets synchronised)
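A minimal sketch of what the helper's spawn operation might look like,
assuming a plain pipe-based setup. In the real design the helper would
hand pid/stdinfd/stdoutfd back to Squid over a Unix socket (e.g. with
SCM_RIGHTS); here the fds are simply returned in-process, and the
`spawn` name is hypothetical, not Squid code.

```python
import os

def spawn(argv, stderr_fd=2):
    """Fork/exec argv from a small helper process.

    Returns (pid, child_stdin_write_fd, child_stdout_read_fd) -- the
    three values the helper would send back to Squid. Because the
    helper's own footprint is tiny, the fork copies almost nothing,
    regardless of how big Squid's cache index is.
    """
    stdin_r, stdin_w = os.pipe()
    stdout_r, stdout_w = os.pipe()
    pid = os.fork()
    if pid == 0:
        # Child: wire the pipe ends onto fds 0/1, inherit stderr, exec.
        os.dup2(stdin_r, 0)
        os.dup2(stdout_w, 1)
        os.dup2(stderr_fd, 2)
        for fd in (stdin_r, stdin_w, stdout_r, stdout_w):
            if fd > 2:
                os.close(fd)
        try:
            os.execv(argv[0], argv)
        finally:
            os._exit(127)  # only reached if exec failed
    # Parent (the helper): close the child's pipe ends, keep ours.
    os.close(stdin_r)
    os.close(stdout_w)
    return pid, stdin_w, stdout_r
```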
-Rob
