With NFS, I cannot get redundancy for fault tolerance or proper file locking (who gets
to write first) the way DRBD or OpenGFS provide, and there is also the user management
problem (I guess I'd have to use LDAP + PAM to solve that).
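
For the user management side, a minimal sketch of what the LDAP + PAM setup might look
like, assuming pam_ldap/nss_ldap are installed (the LDAP hostname and base DN below are
placeholders, not anything from this thread):

    # /etc/ldap.conf -- point pam_ldap/nss_ldap at the directory
    # (host and base are placeholder values for your own site)
    host ldap.example.com
    base dc=example,dc=com

    # /etc/nsswitch.conf -- look up accounts in LDAP after local files
    passwd: files ldap
    group:  files ldap
    shadow: files ldap

    # /etc/pam.d/login -- try LDAP first, fall back to local passwd
    auth    sufficient  pam_ldap.so
    auth    required    pam_unix.so try_first_pass

That way every server in the cluster sees the same accounts without copying passwd
files around.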

Maybe I'll try the NAS device approach: get a server with hot-swappable SCSI disks and
install Linux with software RAID 5. That might work for fault-tolerant storage, though
it would be better to have a redundant power supply as well. It will only work as long
as the NAS device doesn't misbehave: if the NAS device goes down, all the servers are
affected too, and the clients will be unable to access their home directories. And if
I put in two NAS devices, then I have the problem of keeping them in sync.
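
A rough sketch of that setup, assuming four data disks (the device names, mount point,
and second hostname are examples only). The rsync line is a crude one-way mirror run
from cron, which is exactly the sync problem I mean, not a real solution to it:

    # build a software RAID 5 array from four SCSI disks (hypothetical devices)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mkfs.ext3 /dev/md0
    mount /dev/md0 /export/home

    # crude one-way sync to a second NAS box (nas2 is a placeholder);
    # run from cron, e.g. every 15 minutes
    rsync -a --delete /export/home/ nas2:/export/home/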

Din


Derek Dresser wrote:
[EMAIL PROTECTED]">
Furthermore, even if it solves the computing/processing and memory sharing, the storage
or filesystem problem still cannot be solved. Even with Red Hat Piranha, the storage of
a cluster of servers cannot be seen as one, unless you have an external SCSI/Fibre
Channel storage device. That will cost a lot.


Can you explain why NFS won't work for you? For user file storage you can
mount the same /home partition on both/all servers. You could also mount
the same partitions with applications, etc if you wanted to. Heck, you
could put everything on a NAS (network attached storage) device (relatively
cheap) and mount the same root file system for multiple servers, or just
boot them from the LTSP server. Is there a reason why that isn't
satisfactory? There probably is, but I'm missing something.

Thanks,
Derek
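
The shared /home arrangement Derek describes is only a couple of lines on each side; a
minimal sketch (the subnet and server hostname are placeholders):

    # on the NFS server, /etc/exports -- export /home to the LAN
    # (the 192.168.1.0 subnet is a placeholder)
    /home 192.168.1.0/255.255.255.0(rw,sync,no_root_squash)

    # on each application server, /etc/fstab -- mount it at boot
    # (nfsserver is a placeholder hostname)
    nfsserver:/home  /home  nfs  rw,hard,intr  0  0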


