On Tue, 2008-08-12 at 19:20 +0300, Alex wrote:
> My problem comes below:
>
> Let's say that I have:
> - N computers (N>8) sharing their volumes (volX, where X=1..N). Each
>   volX is around 120GB.

What exactly do you mean by "sharing their volumes"?

> - M servers (M>3) - which are accessing a clustered shared storage
>   volume (read/write)

Where/what is this clustered shared storage volume these servers are
accessing?

> Now, I want:
> - to build somehow a cluster file system on top of vol1, vol2, ... volN
>   volumes

Do you mean "disk" or "partition" when you say "volumes" here, and are
these disks/partitions in the "N computers" you refer to above?

> - resulted logical volume to be used on SERVER1, SERVER2 and SERVER3
>   (read/write access at the same time)

Hrm. This is all quite confusing, probably because you are not yet
understanding the Lustre architecture. To try to map what you are
describing to Lustre, I'd say your "N computers" are an MDS and OSSes,
their 120GB "volumes" are an MDT and OSTs (respectively), and your
"M servers" are Lustre clients.

> - Using lustre, can i join all volX (exported via iscsi) together in
>   one bigger volume (using raid/lvm) and have a fault-tolerant SHARED
>   STORAGE (failure of a single drive (volX) or server (computerX)
>   should not bring down the storage usage)?

I don't think this computes within the Lustre architecture. You probably
need to review what Lustre does and how it works again.

> - I have one doubt regarding lustre: i saw that it is using EXT3 on
>   top, which is a LOCAL FILE SYSTEM not suitable for SHARED STORAGE
>   (different computers accessing the same volume and writing to it at
>   the same time).

This is moot. Lustre manages the ext3 filesystem as its backing store
and provides shared access.

> - So, using lustre's patched kernels and tools, does ext3 become
>   suitable for SHARED STORAGE?

You probably just need to ignore that Lustre uses ext3 "under the hood"
and trust that Lustre deals with it appropriately.

b.
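P.S. To make the MDS/OSS mapping above concrete, here is roughly how the
roles would be laid out with the standard Lustre tools. The hostnames
(mds1, oss1, server1), device names (/dev/sdb), mount points, and the
fsname "testfs" are all made up for illustration; adjust them to your
setup.

```shell
# One node acts as MGS+MDS: its disk is formatted as the MDT
# (metadata target). Hostname, device, and fsname are hypothetical.
[root@mds1]# mkfs.lustre --fsname=testfs --mgs --mdt /dev/sdb
[root@mds1]# mount -t lustre /dev/sdb /mnt/mdt

# The remaining nodes act as OSSes: each 120GB disk is formatted as
# an OST and pointed at the MGS node.
[root@oss1]# mkfs.lustre --fsname=testfs --ost --mgsnode=mds1@tcp /dev/sdb
[root@oss1]# mount -t lustre /dev/sdb /mnt/ost0

# Your "M servers" are then plain Lustre clients, all mounting the
# same filesystem read/write at the same time.
[root@server1]# mount -t lustre mds1@tcp:/testfs /mnt/testfs
```

The clients never touch the ext3 on the targets directly; they only
ever talk to the MDS/OSS services, which is why the "local filesystem"
concern does not apply.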
_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
