Wolfgang Alper:
> Great idea, pls find attached the requested traces.
        :::
> - The branches themselves react as expected, regardless of the requested
> subdir structure.
> - aufs becomes slower when using more branches.
> - Requesting subdirs through aufs makes it slower, directly depending on the
> number of branches.
> - The ftp server showed significantly more requests when:
>   - branches changed from 8 to 81
>   - using ls on subdirs (the deeper, the more requests)
>
> This seems to provide evidence that all branches are re-accessed and that
> this happens on each subdir.
> (I know that this is expected behaviour for a unifying FS.)

As I wrote before, the type of access is important. In "ls", the core
operations are opendir(3) and readdir(3). The former issues open(2),
which includes re-validating and opening the dir on all branches, and the
latter issues getdents(2), which includes reading entries from all the
opened dirs and merging/whiteouting them.
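
Just for illustration, here is a small standalone program (a sketch of
my own, not aufs code) which times the two phases separately. You could
run it against one of your aufs subdirs and compare with the strace
numbers below.

/* sketch: time opendir(3) vs. readdir(3) separately */
#include <dirent.h>
#include <stdio.h>
#include <time.h>

static double now(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char *argv[])
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <dir>\n", argv[0]);
		return 1;
	}

	double t0 = now();
	DIR *dp = opendir(argv[1]);	/* open(2): lookup/open on every branch */
	if (!dp) {
		perror("opendir");
		return 1;
	}
	double t1 = now();

	long n = 0;
	while (readdir(dp))		/* getdents(2): read entries, merge/whiteout */
		n++;
	double t2 = now();
	closedir(dp);

	printf("opendir %.3fs, readdir (%ld entries) %.3fs\n",
	       t1 - t0, n, t2 - t1);
	return 0;
}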
The cache you are currently focusing on is a feature to eliminate the
cost of readdir. It keeps the previous readdir result in memory and
re-uses it for succeeding readdir calls.
In other words, if opendir(3) is expensive, then caching the result of
readdir will never help.
Actually, open(2) in your log took longer than getdents(2).

81 branches ls /mnt/aufs/00/00
open("/mnt/aufs/00/00", ...) = 3 <65.986979>
getdents64(3, /* 3 entries */, 32768)   = 72 <44.129202>

81 branches ls /mnt/aufs/00/00/00/00
open("/mnt/aufs/00/00/00/00", ...) = 3 <126.054477>
getdents64(3, /* 3 entries */, 32768)   = 72 <31.655094>

81 branches ls /mnt/aufs/00/00/00/00/00/00
open("/mnt/aufs/00/00/00/00/00/00", ...) = 3 <181.638520>
getdents64(3, /* 82 entries */, 32768)  = 1968 <28.858364>

This indicates that the caching in aufs is not the cause of your
problem.
You keep repeating the word "re-access", but that word is not very
meaningful here. What matters is the type of operation.


While I don't know which version of curlftpfs you are using, I tried
the latest CVS version on sourceforge.
I found two issues.

1.
When I simply open a dir on curlftpfs, the LIST ftp command is issued.
This means we have to wait a long time before readdir even starts.

2.
The inode numbers in curlftpfs are broken.
$ sudo ./curlftpfs -o nonempty ftp://ftp.kernel.org /mnt
$ sudo \ls -ali /mnt
total 41
 1 drwxr-xr-x  1 root root 1024 Jan  1  1970 .
 2 drwxr-xr-x 32 root root 1024 Aug 23  2009 ..
 2 lrwxrwxrwx  1 root root   10 Apr 21  2007 .message -> pub/README
 3 drwxrwxrwx  3 root root  109 Jul 25 04:24 bin
 4 dr-xr-xr-x  2 root root   28 Aug 29  1997 dev
 5 d--x--x--x  2 root root   49 May 20  1998 etc
 6 drwxrwx---  2 root root  124 May 21  2001 for_mirrors_only
 7 drwxr-xr-x  2 root root 4096 May 20  1998 lib
 8 drwx------  2 root root    6 Oct  2  2005 lost+found
 9 drwxrwxr-x 11 root root 4096 Feb 12  2009 pub
10 lrwxrwxrwx  1 root root    1 Apr 21  2007 usr -> .
11 lrwxrwxrwx  1 root root   10 Apr 21  2007 welcome.msg -> pub/README

The inode number 2 is duplicated.

$ sudo \ls -i /mnt
4294967295 bin  4294967295 for_mirrors_only  4294967295 pub
4294967295 dev  4294967295 lib               4294967295 usr
4294967295 etc  4294967295 lost+found        4294967295 welcome.msg

The inode numbers are not maintained at all.

As you might know, many unix tools expect a filesystem to maintain its
inode numbers correctly. Some people may assert that there is no such
guarantee on inode numbers; yes, they can be re-used once an inode
becomes free. But that is not what is happening here. All of these
inodes are still in use and totally un-maintained. Since some parts of
aufs rely on the inode number, aufs cannot work with such a broken
filesystem.
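
To illustrate why this matters (a generic sketch of my own, not taken
from any particular tool): many programs decide whether two names refer
to the same file by comparing st_dev and st_ino. With curlftpfs
reporting 4294967295 for every entry, such a check believes every file
is the same one.

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

/* the usual "same file" test via (st_dev, st_ino) */
static int same_file(const char *a, const char *b)
{
	struct stat sa, sb;

	if (stat(a, &sa) || stat(b, &sb))
		return 0;
	return sa.st_dev == sb.st_dev && sa.st_ino == sb.st_ino;
}

int main(void)
{
	/* on the curlftpfs mount above this wrongly prints 1 */
	printf("%d\n", same_file("/mnt/bin", "/mnt/etc"));
	return 0;
}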
I don't know whether this is the cause of your problem. But if you
really want to use curlftpfs as an aufs branch, then you should ask its
maintainer to support inode numbers first.
I would also suggest tracing the behaviour of curlftpfs itself.
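
If its maintainer wanted a simple way to support that, one possible
approach (only a sketch under my own assumptions, not actual curlftpfs
code) is to derive a stable st_ino from the path in getattr and mount
with "-o use_ino" so the kernel honours it.

/* sketch: stable, path-derived inode numbers for a FUSE fs */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>

/* FNV-1a hash: the same path always yields the same inode number */
static ino_t path_to_ino(const char *path)
{
	uint64_t h = 0xcbf29ce484222325ULL;

	while (*path) {
		h ^= (unsigned char)*path++;
		h *= 0x100000001b3ULL;
	}
	return (ino_t)(h ? h : 1);	/* never return ino 0 */
}

static int sketch_getattr(const char *path, struct stat *st)
{
	memset(st, 0, sizeof(*st));
	if (strcmp(path, "/"))
		return -ENOENT;		/* the real fs would ask the ftp server */
	st->st_mode = S_IFDIR | 0755;
	st->st_nlink = 2;
	st->st_ino = path_to_ino(path);	/* stable per-path inode number */
	return 0;
}

static struct fuse_operations sketch_ops = {
	.getattr = sketch_getattr,
};

int main(int argc, char *argv[])
{
	/* run as: ./sketch -o use_ino /mnt/point */
	return fuse_main(argc, argv, &sketch_ops, NULL);
}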


J. R. Okajima
