On Sunday, 23rd September 2001, Poul-Henning Kamp wrote:
Things to look out for:
1. !ufs filesystems
I am irredeemably slack for not testing this a lot but...
I believe I saw bad interactions between vmiodirenable and isofs on 4.3-R.
I mounted a CD, looked at stuff on it, did a lot of other
On Sun, Sep 23, 2001 at 02:09:46PM -0700, Matt Dillon wrote:
:Has the problem of small-memory machines (< 64M, IIRC) been solved now? As I
:understand it vmiodirenable is counter-productive for these boxes.
:Maybe one could decide on-boot whether the amount of mem is enough to
:make it useful?
:
:Then I suggest the following to be changed:
:
:#
:# This file is read when going to multi-user and its contents piped thru
:# ``sysctl'' to adjust kernel values. ``man 5 sysctl.conf'' for details.
:#
:
:# $FreeBSD: src/etc/sysctl.conf,v 1.5 2001/08/26 02:37:22 dd Exp $
:
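The on-boot decision suggested above could be sketched roughly like this. This is my own illustration, not code from the thread: the 64M figure is the threshold mentioned in the question, and the helper name `enough_mem_for_vmio` is made up.

```shell
#!/bin/sh
# Sketch: decide at boot whether vfs.vmiodirenable is worth enabling,
# based on physical memory. Threshold and helper name are assumptions.

VMIO_MIN_BYTES=$((64 * 1024 * 1024))   # below ~64M, vmiodirenable may hurt

# Succeeds (exit 0) if the given physical memory size, in bytes,
# is large enough for vmiodirenable to be useful.
enough_mem_for_vmio() {
    physmem=$1
    [ "$physmem" -ge "$VMIO_MIN_BYTES" ]
}

# On a real FreeBSD box this would be fed hw.physmem, e.g.:
#   physmem=$(sysctl -n hw.physmem)
#   enough_mem_for_vmio "$physmem" && sysctl vfs.vmiodirenable=1
```

Running the check from an rc script rather than hardcoding the value in sysctl.conf is what makes the setting adapt to the machine.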
Well, this has turned into a rather sticky little problem. I've
spent all day going through the vnode/name-cache reclaim code, looking
both at Seigo's cache_purgeleafdirs() and my own patch.
This is what is going on: The old code refused to reuse any vnode that
had (A)
Can you forward me your patch? I'd like to try it out on some machines in
the TSI lab.
: Hmmm. This would seem to be a step back to the days when caching was done
:relative to the device as opposed to the file-relative scheme we have now.
:One of the problems with the old scheme as I recall is that some filesystems
:like NFS don't have a 'device' and thus no physical block
In message [EMAIL PROTECTED], Matt Dillon writes:
:My patch doesn't make a distinction but assumes that (A) will tend to
:hold for higher level directories: that is, that higher level directories
:tend to be accessed more often and thus will tend to have pages in the
:VM Page Cache, and thus not be candidates for reuse anyway. So my patch
:has a very similar effect but without the overhead.

Back when I rewrote the VFS namecache back in 1997 I added that
clause because I saw directories getting nuked in no time because
there were no pages
In message [EMAIL PROTECTED], Matt Dillon writes:
Ah yes, vmiodirenable. We should just turn it on by default now. I've
been waffling too long on that. With it off the buffer cache will
remember at most vfs.maxmallocspace worth of directory data (read: not
very much), and
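If it were turned on by default as proposed here, the equivalent per-machine setting would be a single line in /etc/sysctl.conf. A sketch of that fragment, showing only the knob under discussion:

```conf
# /etc/sysctl.conf fragment: enable VMIO-backed directory caching.
vfs.vmiodirenable=1
```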
I ran one of my trivial benchmarks here: make -j 12 buildworld
on a dual 866MHz P3 with 640M RAM.

Most of the stuff reported by /usr/bin/time -l here is useless:
almost all the numbers are inside the standard deviation I have
recorded for them on this box.

Block input operations is the one notable exception and it tells a
very interesting story: Matt's patch results in a 4% increase, but
combined with vmiodirenable it results in a 21.5% decrease.

That's pretty darn significant: one out of every five I/Os has
been saved.

The reason it has
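The percentages quoted here are plain ratios of the block-input counts reported by /usr/bin/time -l. A throwaway helper shows the arithmetic; the helper name and the sample counts below are mine, not numbers from the benchmark.

```shell
# pct_change BASELINE NEW -> percentage change from BASELINE to NEW,
# printed with one decimal place. Illustrative only; the 200/157 and
# 100/104 figures are invented, not the buildworld data.
pct_change() {
    awk -v a="$1" -v b="$2" 'BEGIN { printf "%.1f\n", (b - a) * 100.0 / a }'
}

pct_change 200 157   # a 21.5% decrease prints as -21.5
```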
In message [EMAIL PROTECTED], Julian Elischer writes:
:Notice that both the user and system times increased.
:If there had been another parallel task, the overall system throughput
:may have decreased.
:
:I'm not saying this is wrong, just that we should look at other
:workloads too. No point in optimising the system for compiling
:itself.. that's