Rob wrote:
> In particular benchmarks benefit a lot from this ;-)
I only use the benchmarks to try to understand the mechanism. :-) What we are seeing with large memory is that relatively idle systems, where there is not enough activity to put a load on memory, are losing critical directories. Over the weekend, a 5 GB 64-bit system with low activity was brought down inelegantly. When we tried to bring it back up again, /etc was gone!

> If I had time to measure the effect, I would decrease
> the maximum number of pages flushed out at once, and
> then probably decrease the percentage at which bdflush
> kicks in.

In 2.4.17, the parameter for the number of pages flushed seems to have disappeared; you only have %cache dirty, jiffies between kupdate runs, seconds(?) until a buffer counts as dirty, and %cache dirty before bdflush is invoked asynchronously. What I have found is that setting %cache dirty and the async %cache dirty threshold to zero is the most secure, but it may actually degrade performance for the benchmark write. In addition, the overall write rate for the system is half of what the benchmark gets with the defaults. Has anyone tried changing the BDFLUSH values, and with what success?
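For anyone wanting to experiment, the bdflush tunables live in /proc/sys/vm/bdflush on 2.4 kernels. A rough sketch of how I've been poking at them is below; the field layout (and even the number of fields) varies between 2.4 versions, and the values shown are purely illustrative, not a recommendation, so check Documentation/sysctl/vm.txt in your own tree before writing anything back:

```shell
# Show the current bdflush parameter set (a single line of integers;
# meaning and count of the fields depend on your exact 2.4 version).
cat /proc/sys/vm/bdflush

# Write a full parameter set back as root. The first field is nfract
# (the %cache-dirty threshold) in the trees I've looked at; the other
# values here are illustrative placeholders, not tested settings.
echo "0 500 64 256 500 3000 0 0 0" > /proc/sys/vm/bdflush
```

Note that the write replaces the whole line at once, so you have to re-supply every field, not just the one you want to change.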
