On Tue, Feb 24, 2004, Henrik Nordstrom wrote:
> On Tue, 24 Feb 2004, Adrian Chadd wrote:
>
> > Here's my latest patch. I've broken out the parsing/request
> > initialisation code from commReadRequest() into a separate function
> > which I can then call from keepaliveNextRequest(). Please review
> > and comment. I've tested it locally and it seems to work just fine.
>
> Looks good, except that I would prefer to have the do_next_read thing
> eliminated. Either by moving "all" the related logic down to
> clientParseRequest or by moving it completely out and returning a
> different status depending on the result (parsed, needs more data,
> failed).
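For concreteness, I read the status-return idea as something like the
sketch below. The names and the toy parsing are purely illustrative,
not the actual Squid-3 code:

#include <cstdio>
#include <cstring>

// Hypothetical result type for the parser; illustrative names only.
enum ParseResult {
    PARSE_OK,          // a complete request was parsed
    PARSE_INCOMPLETE,  // need more data before we can parse
    PARSE_ERROR        // request is malformed; fail the connection
};

struct Conn {
    char buf[4096];    // stand-in for the connection's read buffer
};

// The parser reports what happened; it does NOT set a do_next_read
// flag as a side effect.
static ParseResult parseRequest(Conn *conn) {
    if (strstr(conn->buf, "\r\n\r\n") == nullptr)
        return PARSE_INCOMPLETE;      // headers not complete yet
    if (strncmp(conn->buf, "GET ", 4) != 0)
        return PARSE_ERROR;           // toy validity check
    return PARSE_OK;
}

// The caller owns the "what next" decision.
static void readHandler(Conn *conn) {
    switch (parseRequest(conn)) {
    case PARSE_OK:
        puts("dispatch request");
        break;
    case PARSE_INCOMPLETE:
        puts("schedule another read");  // replaces do_next_read = 1
        break;
    case PARSE_ERROR:
        puts("send error reply, close");
        break;
    }
}

int main() {
    Conn c;
    strcpy(c.buf, "GET / HTTP/1.0\r\n\r\n");
    readHandler(&c);   // prints "dispatch request"
}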
*nod* It doesn't look like a "trivial" fix. Would you mind if I
committed the current work, sans reworking the do_next_read flag, so
it gets some testing? I'm trying to get squid-3 stable before I jump
in to try and improve some of the code.

> > http://cyllene.uwa.edu.au/~adrian/bt .. have a look. That particular
> > node had about 37,000 entries.. Squid took up a good 1.2gig of RAM.
> > It _did_ recover after about 15 minutes but the memory was so
> > fragmented I needed to restart to get any decent performance..
>
> Ugh.. is that a back trace? any backtrace beyond depth 20 in Squid is
> a definite sign of broken design..

Heh. Yup.

> Unfortunately I am not yet familiar with how mem_node operates, why
> such massive buildup of entries can happen or why it is recursing in
> your trace. Robert?

I wouldn't blame Robert just yet.. I haven't changed any of the code
relating to this; there may be some boundary case which hasn't been
thought of.. I _was_ mirroring a local FTP server with far, far too
many ISO images on it, and I did break the process half way through.

Adrian
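PS. For anyone digging into that trace: a backtrace that deep usually
means one stack frame per entry, i.e. a per-node recursion. A purely
illustrative sketch of the difference follows -- the Node type below
is made up, and I haven't checked what mem_node actually does:

#include <cstddef>
#include <cstdio>

// Hypothetical node chain, standing in for a long run of mem_node
// entries; not the real Squid structure.
struct Node {
    Node *next;
};

// Recursive walk: one stack frame per node, so 37,000 entries means
// a 37,000-deep backtrace (and possible stack exhaustion).
static size_t countRecursive(Node *n) {
    if (n == nullptr)
        return 0;
    return 1 + countRecursive(n->next);
}

// Iterative walk: constant stack depth regardless of chain length.
static size_t countIterative(Node *n) {
    size_t count = 0;
    for (; n != nullptr; n = n->next)
        ++count;
    return count;
}

int main() {
    // Build a short chain for demonstration.
    Node nodes[5];
    for (int i = 0; i < 4; ++i)
        nodes[i].next = &nodes[i + 1];
    nodes[4].next = nullptr;

    printf("recursive: %zu\n", countRecursive(&nodes[0]));
    printf("iterative: %zu\n", countIterative(&nodes[0]));
}

If the mem_node walk really is recursive, flattening it to the
iterative form would cap the stack depth no matter how many entries
build up.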
