As an extra data point: I zeroed /proc/fs/lustre/llite/fs0/read_ahead_stats, re-ran the test and checked the read-ahead stats afterward... and they were all zero.  So I assume Lustre isn't doing any read-ahead in this case.

On 17/07/2007, at 9:49 AM, Stuart Midgley wrote:

We are seeing really bad performance from a Java app, and it boils down to
poor performance of 1-byte reads from a Lustre file system.  After a
detailed strace of the application, I have put together the following
code snippet, which reproduces the problem:

#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv) {
        /* 1,000,000 one-byte reads: seek to every 500th byte of the
           file, then read the next 100 bytes one byte at a time */
        int fd = open("5GB_file", O_RDONLY);
        int i, j;
        char b[1];

        for (i = 0; i < 10000; i++) {
                lseek(fd, (off_t)i * 500, SEEK_SET);
                for (j = 0; j < 100; j++) {
                        read(fd, b, 1);
                }
        }
        close(fd);
        return 0;
}


5GB_file is just a dd if=/dev/zero of=5GB_file bs=1024k count=5000.
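
For contrast, here is the same access pattern through stdio (a sketch only,
untimed, and obviously not a drop-in fix for the Java app): the 1-byte
reads are then served out of stdio's userspace buffer, so far fewer read(2)
calls ever reach Lustre.

#include <stdio.h>

int main(void) {
        /* identical seek/read pattern, but buffered in userspace */
        FILE *f = fopen("5GB_file", "r");
        int i, j;
        char b[1];

        for (i = 0; i < 10000; i++) {
                fseek(f, (long)i * 500, SEEK_SET);
                for (j = 0; j < 100; j++) {
                        /* filled from stdio's buffer, not a syscall each time */
                        fread(b, 1, 1, f);
                }
        }
        fclose(f);
        return 0;
}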

Anyway, the unbuffered version runs in <1s on local disk, ~5s on NFS and
>30s on Lustre...  I was disappointed to see Lustre slower than NFS.  I
was hoping Lustre's read-ahead would be triggered by this access pattern,
but it doesn't appear to be.  Is there any way I can tune Lustre to work
better with this code?  (I know, change the code, but it isn't that easy -
this is a C reproduction of an strace of a Java app, and changing the
original Java app isn't straightforward.)
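
The only other application-side idea I can think of is hinting the kernel
with posix_fadvise(2) before the reads (again just a sketch - I have no
idea whether Lustre's client honors fadvise at all, and it isn't directly
reachable from Java anyway):

#define _XOPEN_SOURCE 600       /* for posix_fadvise */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
        int fd = open("5GB_file", O_RDONLY);

        /* tell the kernel we will read this file sequentially, which
           normally makes it read ahead more aggressively */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        /* ... same seek/read loop as above ... */

        close(fd);
        return 0;
}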

Thanks
Stu.



--
Dr Stuart Midgley
[EMAIL PROTECTED]



_______________________________________________
Lustre-discuss mailing list
[email protected]
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
