--On May 8, 2013 8:35:18 +1000 Dewayne Geraghty <dewayne.gerag...@heuristicsystems.com.au> wrote:

-----Original Message-----
From: owner-freebsd-sta...@freebsd.org
[mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Mikolaj Golub
Sent: Wednesday, 8 May 2013 6:42 AM
To: Göran Löwkrantz
Cc: Kostik Belousov; freebsd-stable@freebsd.org
Subject: Re: Nullfs leaks i-nodes

On Tue, May 07, 2013 at 08:30:06AM +0200, Göran Löwkrantz wrote:
> I created a PR, kern/178238, on this but would like to know if anyone
> has any ideas or patches?
>
> Have updated the system where I see this to FreeBSD 9.1-STABLE #0
> r250229 and still have the problem.

I am observing an effect that might look like an inode leak, which I
think is due to the free nullfs vnode caching recently added by kib
(r240285): the free inode count does not increase after unlink; but if
I purge the free vnode cache (temporarily setting vfs.wantfreevnodes
to 0 and watching vfs.freevnodes decrease to 0), the free inode count
grows back.
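
A rough way to do that drain and watch the free inode count recover
might look like this (the lower filesystem path is just a placeholder):

old=$(sysctl -n vfs.wantfreevnodes)   # remember the current target
sysctl vfs.wantfreevnodes=0           # let the free-vnode cache drain
while [ "$(sysctl -n vfs.freevnodes)" -gt 0 ]; do sleep 1; done
df -i /path/to/lower/fs               # "ifree" should have grown back
sysctl vfs.wantfreevnodes=$old        # restore the original target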

You have only about 1000 inodes available on your underlying fs, while
vfs.wantfreevnodes is, I think, much higher, so you end up running out
of i-nodes.
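
You can confirm this with df -i on the underlying filesystem and
compare the "ifree" column with the cache target (the mount point is a
placeholder):

df -i /path/to/lower/fs
sysctl vfs.wantfreevnodes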

If that is really your case, you can disable caching by mounting nullfs
with nocache (it looks like caching is not important in your case).
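
For example (the paths are just placeholders):

mount -t nullfs -o nocache /path/to/lower/fs /path/to/nullfs/mount

or the equivalent line in /etc/fstab:

/path/to/lower/fs  /path/to/nullfs/mount  nullfs  rw,nocache  0  0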

--
Mikolaj Golub

Hi Göran,

After I included Kib's vnode caching patch, the performance on my "port
builder" machine decreased significantly. The "port builder" is one of
many jails, and nullfs is used extensively; I was starving the system
of vnodes. Increasing kern.maxvnodes resulted in better performance
than the original system configuration without vnode caching. Thanks
Kib :)

I don't think you'll run out of vnodes, as it is self-adjusting (that
was my concern too).

I changed kern.maxvnodes to approximately 3 times what it wanted and
tuned for my needs. Try it and keep an eye on:
sysctl vfs.numvnodes vfs.wantfreevnodes vfs.freevnodes \
    vm.stats.vm.v_vnodepgsout vm.stats.vm.v_vnodepgsin
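
For reference, the kind of tuning I mean, with a purely illustrative
value (pick your own from vfs.numvnodes under load):

sysctl kern.maxvnodes vfs.numvnodes    # current limit and demand
sysctl kern.maxvnodes=600000           # raise the limit at runtime
echo 'kern.maxvnodes=600000' >> /etc/sysctl.conf   # persist across reboots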

Regards, Dewayne

Hi Dewayne,

I got a few of those too, but I didn't connect them with the FW problem, as there seems to be reclaim pressure here.

On the FW I get these numbers:
vfs.numvnodes: 7500
vfs.wantfreevnodes: 27936
vfs.freevnodes: 5663
vm.stats.vm.v_vnodepgsout: 0
vm.stats.vm.v_vnodepgsin: 4399

while on the jail systems I get something like this:
vfs.numvnodes: 51212
vfs.wantfreevnodes: 35668
vfs.freevnodes: 35665
vm.stats.vm.v_vnodepgsout: 5952
vm.stats.vm.v_vnodepgsin: 939563

As far as I can understand, the fact that vfs.wantfreevnodes and vfs.freevnodes are almost equal suggests that we have reclaim pressure.

So one fix for small NanoBSD systems would be to lower vfs.wantfreevnodes; I will test that on a virtual machine and see if I can get better reclaim.
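
Something along these lines, where the value is only a first guess to
be tuned on the VM:

sysctl vfs.wantfreevnodes=2000    # shrink the free-vnode cache target
echo 'vfs.wantfreevnodes=2000' >> /etc/sysctl.conf   # persist if it helps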

Best regards,
        Göran

_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
