On Thu, May 18, 2006 at 01:37:47AM +0200, Albert Shih wrote:
>  On 17/05/2006 at 18:17:54-0400, Charles Swiger wrote:
> > On May 17, 2006, at 5:35 PM, Albert Shih wrote:
> > >I'm looking for some technique/command/anything that can make a
> > >very fast "du", especially when the file system contains lots of
> > >hard links.

I know of no ready-made solution to this... just a few random thoughts:

If you didn't have subdirs and hard links, you could cache the
results of a slow du somewhere, and look up the results there,
updating the cache only if the directory mtime(s) changed. Let

du-cache := { (dir-ino, (du-value, timestamp)) |
                  dir-ino is directory inode number [key of cache],
                  du-value is disk usage of dir-ino,
                  taken at timestamp }

But with subdirs, you need to take care of recursion; and that
makes maintaining the du-cache somewhat more complicated.

With hard links, especially across directories, you need an additional
hard-link cache; and AFAICS there's no way to have that updated
automatically when a hard-linked file changes size elsewhere...

...unless you decide to add some hooks to VFS(9). But if you go that
route, you might as well do the entire fast-du bookkeeping at the
VFS level; that's most likely a major undertaking, though (if you
do, remember quota(1)).

If you don't need absolute accuracy 100% of the time, you could
build a du-cache once every few days or so, just like locate(1)'s
database, and use directory timestamps to update it incrementally;
only the initial build would take a long time, and subsequent
updates would hopefully be relatively fast (depending on usage
patterns, of course).

> Albert SHIH
> Universite de Paris 7 (Denis DIDEROT)
> U.F.R. de Mathematiques.
> 7th floor, plateau D, office 10


Cordula's Web. http://www.cordula.ws/
freebsd-questions@freebsd.org mailing list
