On Mon, Mar 22, 2010 at 10:51 PM, Venkatesh Nandakumar <[email protected]> wrote:
> Ilug-c members have suggested GlusterFS and openafs, I'd be looking into it
> for now, if I receive no other suggestions. (Thanks guys!)
Cluster file systems are generally high maintenance and should not be attempted without the right infrastructure. Aggregating the unused disk space on desktops for any kind of online use is a bad idea, though it may work well for distributed backups. What happens if someone powers off one of your lab computers? All data residing on that node becomes inaccessible. Is that something you can control in your environment? Even booting such systems becomes a challenge: just try cross-mounting NFS shares on 3 servers and you'll see the dependency issues. For predictable performance and availability, all cluster file systems require dedicated back-end storage nodes, gateways, metadata servers and lock managers.

IMO the easiest approach is to use NFS auto-mounting. I had data on ~30 workstations shared like this:

1. On each workstation, NFS-export the "data" file system in a standard way, e.g. /data/$hostname.

2. Then use "executable maps" (man autofs) and some creative scripting to auto-mount the file systems on demand. NIS/LDAP was too hard for me back when I did this.

3. Users can then "cd /data/$hostname" from any computer and autofs will mount the NFS share on demand. If $hostname is powered off at that time, your users will get an NFS timeout, so NFS soft mounting becomes essential.

4. Ensure you are using NIS/LDAP to synchronize all UID/GID numbers; otherwise file ownership and permissions won't work.

It's been nearly a decade since I last did NFS auto-mounts. Do let me know if you require my scripts/config files; I'll have to dig them up.

If you want any kind of production-quality cluster file system setup, I suggest you do it with the right infrastructure.

- Raja

_______________________________________________
ILUGC Mailing List:
http://www.ae.iitm.ac.in/mailman/listinfo/ilugc
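For step 1, the export might look like this on each workstation. This is a minimal sketch, not the original config: the hostname "ws01" and the lab subnet 192.168.1.0/24 are assumptions.

```shell
# /etc/exports on workstation "ws01" (hostname and subnet are
# hypothetical): export the local data area read-write to the lab LAN.
/data/ws01  192.168.1.0/24(rw,sync,root_squash)
```

After editing the file, `exportfs -ra` re-reads the exports table and applies the change.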
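The executable map from step 2 can be sketched roughly as below. autofs runs the map program with the lookup key (here, a hostname) and mounts whatever location the script prints; the soft,intr options give the timeout behaviour described in step 3. The paths, the auto.master entry, and the default hostname are assumptions for illustration, not the original scripts.

```shell
#!/bin/sh
# Sketch of an autofs executable map (see autofs(5), auto.master(5)).
# Assumed auto.master entry:   /data  /etc/auto.data
# autofs invokes this script as:  /etc/auto.data <hostname>
# and expects mount options plus the NFS location on stdout.

map_entry() {
    key="$1"
    # Soft-mount so a powered-off workstation yields an NFS timeout
    # instead of hanging the client.
    printf '%s\n' "-fstype=nfs,soft,intr ${key}:/data/${key}"
}

map_entry "${1:-ws01}"    # "ws01" is a hypothetical default key
```

With a map like this in place, "cd /data/$hostname" on any client triggers an on-demand mount of $hostname:/data/$hostname.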
