Hello all, I'm in the process of evaluating Lustre for a project I'm working on, and I'd like to ask for some advice on tuning my configuration for better performance. For my evaluation work, I've got one MGS/MDS and four OSSes, each hosting one OST. This storage cluster was put together from some spare nodes left over from a small, currently unused compute cluster, and the disks are all single SCSI drives. All of the Lustre servers are running 2.6.18-92.1.17.el5_lustre.1.8.0smp kernels, and the clients are patchless. All networking is over 1Gb Ethernet.
In our application, an instrument streams data to a (compute) cluster, which does some processing and writes results to a file; all of this generally has to happen in real time (that is, keep up with the streaming data). The files are written concurrently by processes running on the cluster; that is, for a particular data set, multiple processes write to one file. Because of the way the instrument distributes data to the cluster nodes, and because of the output file format, each cluster process generally writes a relatively small block of data, but at high frequency (roughly every 10-100 ms). It may be important to note that the blocks written by a single process are not, in general, contiguous.

The aggregate data rate to the output files is approximately 100 MB/s at this time, although it may ramp up considerably at a later date. While my brief testing with IOR showed acceptable write throughput to the Lustre filesystem, I have been unable to achieve anywhere near that figure with our application doing the writes, and I'm concerned that the write pattern is a severely limiting factor. In this situation, does anyone have advice on what I ought to be looking at to improve performance on Lustre?

-- Martin

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
