--On May 7, 2013 23:41:51 +0300 Mikolaj Golub <[email protected]> wrote:

> On Tue, May 07, 2013 at 08:30:06AM +0200, Göran Löwkrantz wrote:
>> I created a PR, kern/178238, on this but would like to know if anyone
>> has any ideas or patches?
>>
>> Have updated the system where I see this to FreeBSD 9.1-STABLE #0
>> r250229 and still have the problem.
>
> I am observing an effect that might look like an inode leak, which I
> think is due to the free nullfs vnode caching recently added by kib
> (r240285): the free inode count does not increase after unlink, but if
> I purge the free vnode cache (temporarily setting vfs.wantfreevnodes
> to 0 and watching vfs.freevnodes drop to 0) the inode count grows
> back.
>
> You have only about 1000 inodes available on your underlying fs, while
> vfs.wantfreevnodes, I think, is much higher, resulting in running out
> of inodes.
>
> If this is really your case, you can disable caching by mounting
> nullfs with nocache (it looks like caching is not important in your
> case).
>
> --
> Mikolaj Golub

Thanks Mikolaj, mounting the active fs with 'nocache' fixed it, keeping ifree steady.
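For anyone hitting the same thing, the remount is a one-liner; the paths below are from my setup and purely illustrative:

```
# remount the nullfs view of the small data partition with caching disabled
umount /var/db
mount -t nullfs -o nocache /data/var.db /var/db
```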

Any idea how to "fix" this in NanoBSD? The data partition is created with only 1024 inodes by the script, so anything that involves deleting files on this r/w area will be bitten.
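One thought, assuming the partition is built with newfs(8) somewhere in the script: lowering the -i density (bytes of data space per inode) when the filesystem is created would raise the inode count well above 1024. The device name below is hypothetical:

```
# hypothetical: create the data fs with one inode per 2 KB of space
# instead of the script's sparse default
newfs -U -i 2048 /dev/ada0s4
```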

Since the nocache option is not valid for device mounts, I see no way to have the nullfs mounts of this specific partition inherit it.
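The only workaround I can see is to list each nullfs mount explicitly in /etc/fstab with nocache instead of relying on inheritance (paths hypothetical):

```
# /etc/fstab fragment; /data is the small 1024-inode r/w partition
/data/etc   /etc   nullfs   rw,nocache   0 0
/data/var   /var   nullfs   rw,nocache   0 0
```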

The easiest fix might be to set vfs.wantfreevnodes=0 in the default sysctl.conf, maybe? But will this have implications for non-nullfs filesystems? Only UFS? Even ZFS?
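If that route is taken, the fragment itself is trivial, though note the knob is global to the VFS layer, not per-filesystem:

```
# /etc/sysctl.conf: keep no free vnodes cached (affects every filesystem)
vfs.wantfreevnodes=0
```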

Thanks again,
        Göran

_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "[email protected]"
