Greetings. I was part of a team that did this twice: once at the NASA Goddard Space Flight Center Hydrological Sciences Branch and once more at the Center for Research on Environment & Water. Both were successful experiences, I thought.
We used commercial off-the-shelf PC hardware and managed switches to build a Beowulf-style cluster consisting of compute nodes plus OSS and MDS nodes. The OSS and the MGS/MDS units were separate machines, per the recommendation of the Lustre team. The back-end storage OST units were 4U boxes containing SATA disks, connected to the OSS nodes via CX4 (I think) cables. We used PERC 6/i RAID controllers, with the corresponding MegaCli64 software tool on the OSS units, to manage the disks within. The OS was the Red Hat-based CentOS 4, upgraded to CentOS 5.5 before I left. The OST disks were formatted with the Lustre file system (rough command sketches in the P.S. below).

We were able to successfully export the Lustre mount points via NFS from the main client box. We used the data on the Lustre file system to produce and display Earth science images on an ordinary web interface (using a combination of the proprietary IDL imaging software and the freely available GrADS imaging software from IGES).

We chose the Lustre file system for the project because of its price point (free and open source, FOSS) and because it performed better for our purposes than GFS and the then-early GlusterFS in our tests.

Just a data point for you.

megan
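P.S. A few rough sketches, in case they help. For keeping an eye on the RAID volumes behind the OSTs, the MegaCli64 invocations we scripted looked along these lines (the commands are standard, but I am reconstructing them from memory):

    # List the logical (RAID) drives on all adapters
    MegaCli64 -LDInfo -Lall -aALL

    # List the physical disks, e.g. to spot a failed member
    MegaCli64 -PDList -aALL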
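The Lustre formatting and mounting went roughly like the sketch below. This is 1.6/1.8-era syntax, and the device names, the hostname mds01, and the fsname lfs01 are made-up examples, not our actual configuration:

    # On the MGS/MDS node: format and mount the combined MGS/MDT
    mkfs.lustre --fsname=lfs01 --mgs --mdt /dev/sdb
    mkdir -p /mnt/mdt
    mount -t lustre /dev/sdb /mnt/mdt

    # On each OSS node: format and mount one OST per RAID volume,
    # pointing it at the MGS node
    mkfs.lustre --fsname=lfs01 --ost --mgsnode=mds01@tcp0 /dev/sdc
    mkdir -p /mnt/ost0
    mount -t lustre /dev/sdc /mnt/ost0

    # On the client: mount the assembled file system
    mkdir -p /mnt/lfs01
    mount -t lustre mds01@tcp0:/lfs01 /mnt/lfs01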
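The NFS re-export was just the ordinary Linux NFS server on the client box pointed at the Lustre mount point, something like the following (the subnet is a made-up example):

    # /etc/exports on the Lustre client acting as NFS server
    /mnt/lfs01 192.168.1.0/24(rw,async,no_root_squash)

    # Reload the export table and start the NFS service
    exportfs -ra
    service nfs start

Other machines then mounted it as a plain NFS share, with no Lustre client software needed on their end.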
