Hello Dean,

Thanks a lot for your email.
Yes, I was aware that there is no lock protocol in PVFS2, and I assume
leaving it without one would be ideal. I will not be looking at the
multiple-MDS design for now, although it suddenly seems like an
interesting idea. As I understand it, GPFS is not available for anyone
to download - or is it? I also tried looking for MDS comparisons of
GPFS versus PVFS2, but I have not found any yet. I will get back to you
when I do consider multiple MDS on pNFS.

Sincerely,
Nirmal Thacker

Dean Hildebrand wrote:
>
> Hi Nirmal,
>
> If you are looking for multiple metadata servers with pNFS, it is
> definitely *possible*. In our implementation of pNFS with GPFS, every
> server can be both an MDS and a DS (data server).
>
> The real question is whether the underlying file system (PVFS2, GPFS,
> GFS, etc.) supports NFSv4 from multiple servers. If that support
> exists, then pNFS multiple-MDS support may also work. There are
> several aspects to supporting NFSv4 from multiple servers:
> a) Lock coordination: are NFS client locks sent to each MDS
> coordinated? For PVFS2, the answer is no, since there is no
> underlying lock protocol.
> b) Delegation coordination: which clients have which files delegated
> needs to be coordinated. Fortunately the NFSv4 server provides a way
> to disable delegations through /proc.
> c) Recovery: this is just plain difficult as clients try to
> re-acquire their locks, etc.
>
> My pNFS/PVFS2 code is getting pretty rusty, and so is my memory of
> what it supports, but as long as you don't care about the above 3
> things, I think multiple MDS should work.
>
> Sorry for the delayed reply; fast and bakeathon have taken up my time.
> Thanks, Murali, for the heads-up.
>
> For general questions about pNFS, please email the pNFS mailing list
> [EMAIL PROTECTED]
> For pNFS/PVFS2 questions (or a willingness to port pNFS to the latest
> version of PVFS2!) please email me directly.
>
> Dean Hildebrand
> Research Staff Member
> IBM Almaden Research Center
>
> [EMAIL PROTECTED] wrote on 09/10/2008 11:22:25 AM:
>
> > Re: [Pvfs2-users] Multiple Metadata servers on pNFS/PVFS2
> > From: Nirmal Thacker
> > To: Phil Carns
> > Cc: pvfs2-users
> > Sent by: [EMAIL PROTECTED]
> > Date: 09/10/2008 11:28 AM
> >
> > Hi Phil,
> >
> > Yes, I recently learned that having multiple pNFS MDS servers is not
> > possible.
> >
> > Has there been some work [either some benchmarks/new design] in this
> > direction?
> >
> > Thanks,
> > Nirmal
> >
> > Phil Carns wrote:
> > > Hi Nirmal,
> > >
> > > Are you trying to have multiple pNFS MDS servers, or multiple meta
> > > servers in the PVFS volume?
> > >
> > > I don't believe that the former is supported (but I was hoping
> > > someone else would chime in - I have not used the pNFS/PVFS
> > > implementation first hand).
> > >
> > > The latter should work fine without any particular changes to the
> > > installation steps except for the PVFS configuration files.
> > >
> > > In either case I am pretty sure that the bind option to mount is
> > > not going to do what you are looking for.
> > >
> > > -Phil
> > >
> > > Nirmal Thacker wrote:
> > >> Hello,
> > >>
> > >> I have a question regarding CITI's implementation of pNFS/PVFS2.
> > >>
> > >> Alternatively you could point me to a mailing list for users of
> > >> pNFS/PVFS2 by the CITI group.
> > >> I cannot seem to find a forum/group that discusses issues of
> > >> pNFS/PVFS2 [if it is not this one].
> > >>
> > >> My problem is as follows:
> > >>
> > >> I want to use more than a single metadata server [MDS] in CITI's
> > >> pNFS/PVFS2. I am using the instructions described here:
> > >>
> > >> http://www.citi.umich.edu/projects/asci/pnfs/docs/pnfspvfs2.html
> > >>
> > >> I understand these are not for the current build, but I am using
> > >> an older build of pNFS/PVFS2.
> > >>
> > >> Now if there are two or more MDS nodes, how will clients mount
> > >> the MDS nodes on a single mount point? I am familiar with the
> > >> mount --bind argument
> > >> [http://www.redhatmagazine.com/2007/03/19/how-do-i-mount-an-nfsv4-filesystem-in-two-locations-on-a-single-client/].
> > >>
> > >> Is this what we must do?
> > >>
> > >> This situation arises on two occasions:
> > >>
> > >> 1. When the MDS mounts a PVFS2 filesystem. Multiple MDS would
> > >> then mount the MDS PVFS2 filesystems at one point.
> > >>
> > >> To quote from the instructions:
> > >>
> > >> // Load kernel interface
> > >> // Run on MDS
> > >> insmod /usr/local/bin/pvfs2-server/pvfs2.ko
> > >> /usr/local/bin/pvfs2-server/sbin/pvfs2-client -p /usr/local/bin/pvfs2-server/sbin/pvfs2-client-core
> > >> mount -t pvfs2 tcp://"mds_server_hostname":3334/pvfs2-fs /mnt/pvfs2
> > >>
> > >> 2. When the clients mount an NFSv4 filesystem [essentially a
> > >> PVFS2 filesystem exported by the MDS].
> > >>
> > >> To quote:
> > >>
> > >> Step 7: Mount PVFS2 file system using pNFS
> > >>
> > >> * Copy the /etc/pvfs2tab file to every client
> > >> * Example:
> > >>
> > >> // On client
> > >> insmod /usr/local/bin/pvfs2-layout/pvfs2-pnfs.ko
> > >> /usr/local/bin/pvfs2-layout/sbin/pvfs2-client -p /usr/local/bin/pvfs2-layout/sbin/pvfs2-client-core
> > >> mount -t nfs4 "mds_server_name":/ /mnt/nfs4/
> > >>
> > >> In both cases, multiple MDS cause the mounts to overlap. Although
> > >> I have not tried this out, I suspect it would result in an
> > >> incorrect configuration.
> > >>
> > >> Have you come across this issue when you worked with pNFS/PVFS2
> > >> before?
> > >>
> > >> Thanks for your time.
> > >>
> > >> Nirmal Thacker

_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
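
A note on Dean's point (b) above, disabling delegations through /proc:
on Linux NFS servers of that era, NFSv4 delegations are granted via
file leases, so the usual server-side switch is the fs.leases-enable
sysctl. A minimal sketch, assuming the pNFS-patched kernel honours this
knob for knfsd and that the NFS service is restarted afterwards (the
service name varies by distribution):

    # Turn off file leases, and with them NFSv4 delegations, on each MDS.
    # fs.leases-enable is a stock Linux sysctl; treating it as sufficient
    # for the pNFS-patched nfsd here is an assumption, not CITI guidance.
    echo 0 > /proc/sys/fs/leases-enable

    # Equivalent form; add "fs.leases-enable = 0" to /etc/sysctl.conf
    # to make the setting persistent across reboots.
    sysctl -w fs.leases-enable=0

    # Restart the NFS server so the setting applies to new client sessions.
    /etc/init.d/nfs restart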
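
On the question of overlapping mounts with two or more MDS nodes: as
Phil notes above, mount --bind only re-exposes an already-mounted tree
at a second path on the same client; it does not merge exports from two
servers. A sketch of the more usual layout, with hypothetical hostnames
(mds1, mds2) and mount points that are not taken from the CITI
instructions, is to give each MDS export its own mount point, and to
list the PVFS2 volume in /etc/pvfs2tab using the stock fstab-like
format:

    # Hypothetical /etc/pvfs2tab entry copied to every client; the hostname
    # and port simply follow the placeholder used in the instructions above.
    tcp://mds_server_hostname:3334/pvfs2-fs  /mnt/pvfs2  pvfs2  defaults,noauto  0  0

    # On the client: one NFSv4 mount point per MDS instead of overlapping
    # mounts on a single mount point (mds1 and mds2 are placeholders).
    mkdir -p /mnt/nfs4/mds1 /mnt/nfs4/mds2
    mount -t nfs4 mds1:/ /mnt/nfs4/mds1
    mount -t nfs4 mds2:/ /mnt/nfs4/mds2

    # mount --bind can then make one of these trees visible at another
    # path on the same client, but it never combines the two servers.
    mount --bind /mnt/nfs4/mds1 /srv/pnfs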
