I am trying to get good parallel-write performance to a single file through 
MPI. Our cluster performs well when I write to separate files, but when I 
write to one file I see very little performance increase.

As I understand it, our cluster defaults to one OST per file. There are many 
OSTs, though, which is how we get good performance when writing to multiple 
files. I have been using the command

 lfs setstripe 

to change the stripe count and stripe size. I can see that this works: when I 
run lfs getstripe, the output file shows as striped. But I still get very 
little I/O performance when I write to the striped file.

Working with HDF5 and MPI, I have seen a number of references to tuning 
parameters, but I haven't dug into those yet. First I want to make sure 
Lustre delivers high output performance at a basic level. I wrote a C program 
that uses simple POSIX calls (open, then looping over write), but I don't see 
much increase in performance (I've tried 8 and 19 OSTs, 1 MB and 4 MB chunks, 
writing a 6 GB file).

Does anyone know if this should work? What is the simplest C program I could 
write that would show an increase in output performance after striping? Do I 
need separate processes/threads with separate file handles? I am on Red Hat 
Linux 5; I'm not sure which version of Lustre this is. I have skimmed a 
450-page PDF of the Lustre documentation and saw references to destructive 
testing done during initial setup, but I'm not sure what I can do now. I 
think this is the first work we've done to get high performance writing a 
single file, so I'm worried there is something buried in the Lustre 
configuration that needs to be changed. I can run /usr/sbin/lctl, so maybe 
there are certain parameters I should check?
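For example, I was thinking of checking client-side tunables along these lines (parameter names vary across Lustre versions, so these are guesses at what to look for rather than settings I know apply here):

```shell
# Per-OSC dirty-page cache limit, which can throttle buffered writes
lctl get_param osc.*.max_dirty_mb

# Number of RPCs each client keeps in flight to an OST
lctl get_param osc.*.max_rpcs_in_flight

# Client-side read/write cache size
lctl get_param llite.*.max_cached_mb
```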

best,

David Schneider
_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org