Howdy all, I had a question from a colleague and did not have a ready answer. What is the community's experience with putting together a small, inexpensive cluster that serves Lustre from (some of) the compute nodes' local disks?
They have run some simple tests using a) just a local disk, b) a simple NFS share mounted on the compute nodes, and c) Lustre with the OSS and MDS on the same node. A typical workload for them is compiling the VisIt visualization package. On a local disk this takes 2 to 3 hours; on NFS it was closer to 24 hours; on the small Lustre setup it was about 5 hours. Now they'd like to go a little further and find a Lustre configuration that improves performance compared to local disk. Their workload is mostly metadata intensive rather than bulk-I/O intensive. Is there any experience like that out there?

Cheers,
Andrew

Notes from the one asking the question:
----------------------------------------------------------
What I would like to do now is develop the cheapest small cluster possible that still has good I/O performance. NetApp filers raise the cost significantly. Also, the whole system must come out of the box with the application and all dependencies built, and with good I/O.

One possible approach would be a system with a head node and N compute nodes, each with multiple CPUs and cores, of course. I can then imagine a Lustre file system with the MDS on the head node and M OSSs on the compute nodes, each serving up its local disks. Of course, the compute nodes would then be running both the computational application (likely on all cores) and 0 or 1 OSS.

From what you are saying, it sounds like at a minimum I would need two interfaces per node: one over which the MPI communication for the applications goes, and one for serving the Lustre file system on the nodes that do so. Is this right?

Is this a reasonable direction to go (having both an OSS and computation on some nodes)? Are there examples of good system designs out there?
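For what it's worth, the layout described above (MGS/MDS on the head node, an OSS on some compute nodes, Lustre traffic pinned to a second NIC) might be sketched roughly like this. This is only an illustrative sketch, not a tested recipe: the device paths (/dev/sdb), hostnames (head), interface name (eth1), and fsname (scratch) are all assumptions you would replace for your own hardware.

```shell
# On every node: pin Lustre's LNET traffic to the second interface (eth1),
# keeping the first interface free for MPI traffic. Interface name assumed.
echo 'options lnet networks="tcp0(eth1)"' > /etc/modprobe.d/lustre.conf

# On the head node: format and mount a combined MGS/MDT on a spare disk.
mkfs.lustre --fsname=scratch --mgs --mdt --index=0 /dev/sdb
mkdir -p /mnt/mdt && mount -t lustre /dev/sdb /mnt/mdt

# On each compute node that also acts as an OSS
# (use a distinct --index for every OST in the file system):
mkfs.lustre --fsname=scratch --ost --index=0 --mgsnode=head@tcp0 /dev/sdb
mkdir -p /mnt/ost0 && mount -t lustre /dev/sdb /mnt/ost0

# On all compute nodes: mount the file system as a client.
mkdir -p /scratch && mount -t lustre head@tcp0:/scratch /scratch
```

The lnet "networks" option is what gives you the two-interface split you asked about: MPI rides the first NIC while all OSS/MDS traffic stays on eth1.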
_______________________________________________ Lustre-discuss mailing list [email protected] http://lists.lustre.org/mailman/listinfo/lustre-discuss
