Thanks a lot. I'm still confused about the relative order in which to
configure and install MPI and OrangeFS. I'd appreciate it if anyone could
share a proven, working installation guide for MPI and OrangeFS.


2013/10/18 Rob Latham <[email protected]>

> On Thu, Oct 17, 2013 at 03:05:46PM +0800, xihuang sun wrote:
> > I've checked that MPICH2 has now been folded into the plain 'MPICH'
> > project, and that MPICH v3.0 implements the MPI 3.0 standard. Which
> > one works best with OrangeFS? Thanks
>
> You can use MVAPICH, MPICH2, or OpenMPI.  The ROMIO PVFS2 driver
> hasn't changed in quite some time, so whichever you use will work fine
> with OrangeFS.
>
> ==rob
>
> >
> >
> > 2013/10/17 xihuang sun <[email protected]>
> >
> > > Thanks. The situation now is that I want to run MPI programs over
> > > InfiniBand, so should I use MVAPICH2 or OpenMPI? It seems that
> > > MPICH2 doesn't support InfiniBand.
> > >
> > >
> > > 2013/10/16 Rob Latham <[email protected]>
> > >
> > >> On Mon, Oct 14, 2013 at 01:54:26PM +0800, xihuang sun wrote:
> > >> > Only Intel MPI is installed on the machines. I don't know whether
> > >> > this can be solved with impi. Any more help?
> > >>
> > >> Yeah, I'm almost positive impi does not have PVFS support.
> > >>
> > >> You can build your own ROMIO against impi.  That *should* work, but
> > >> it's been a while since I've tried.
> > >>
> > >> You can build your own MPICH or OpenMPI and install it in your home
> > >> directory, even if the system only has Intel MPI.
> > >>
> > >> ==rob
> > >>
> > >> >
> > >> > 2013/7/3 Rob Latham <[email protected]>
> > >> >
> > >> > > On Mon, Jul 01, 2013 at 03:36:57PM +0800, xihuang sun wrote:
> > >> > > > I installed OrangeFS 2.8.7 on ten nodes, each of them acting
> > >> > > > as both an I/O node and a metadata node. pvfs2-client is
> > >> > > > running on the mdc node and on node{1-10}, and the mount point
> > >> > > > is /mnt/orangefs. Now, when I run MPI programs, they crash
> > >> > > > with:
> > >> > > >
> > >> > > >
> > >> > > > File locking failed in ADIOI_Set_lock. If the file
> > >> > > > system is NFS, you need to use NFS version 3, ensure
> > >> > > > that the lockd daemon is running on all the machines,
> > >> > > > and mount the directory with the 'noac' option (no
> > >> > > > attribute caching).
> > >> > > >
> > >> > > > But my question is: I installed OrangeFS 2.8.7 on a local file
> > >> > > > system, and I am sure of that, so where does the NFS message
> > >> > > > come from?
> > >> > >
> > >> > > That's a ROMIO message.  ROMIO, an implementation of MPI-IO used
> > >> > > nearly everywhere, can select file-system-specific functions, or
> > >> > > fall back to a general all-purpose file system driver.
> > >> > >
> > >> > > The logic for selecting which collection of "file system
> > >> > > routines" to use can be set in two ways:
> > >> > >
> > >> > > - automatically: ROMIO will stat the file or its parent directory
> > >> > >   and use that information to choose from the available file
> > >> > >   systems.
> > >> > >
> > >> > > - manually: one can prefix the file name (the whole file path)
> > >> > >   with 'pvfs2:' and ROMIO will use the PVFS2 routines.
> > >> > >
> > >> > > The nice thing about this approach is that ROMIO uses the
> > >> > > "system interface" and will bypass the kernel.
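
A minimal sketch in C of the manual "pvfs2:" prefix described above. It
assumes an MPI build whose ROMIO includes the PVFS2/OrangeFS driver; the
file name under /mnt/orangefs is only an illustration:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        int rank, rc;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* The "pvfs2:" prefix tells ROMIO to use its PVFS2 routines
         * (system interface, no kernel VFS, no fcntl locks) instead of
         * guessing the file system by stat()ing the path.  The path is
         * hypothetical. */
        rc = MPI_File_open(MPI_COMM_WORLD, "pvfs2:/mnt/orangefs/testfile",
                           MPI_MODE_CREATE | MPI_MODE_RDWR,
                           MPI_INFO_NULL, &fh);
        if (rc != MPI_SUCCESS) {
            fprintf(stderr, "MPI_File_open failed\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* Each rank writes its own rank number at a distinct offset. */
        MPI_File_write_at(fh, (MPI_Offset)(rank * sizeof(int)), &rank, 1,
                          MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }

If the PVFS2 driver was not compiled into ROMIO, opening such a prefixed
name should fail immediately, which is one quick way to check whether
your MPI build has PVFS support.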
> > >> > >
> > >> > > It may be the case that your MPI implementation was not built
> > >> > > with PVFS support, in which case ROMIO will try to use
> > >> > > general-purpose Unix routines on PVFS.  As Kevin says, one of
> > >> > > those is fcntl() to lock file regions, which is not supported
> > >> > > through the OrangeFS VFS interface.
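
To make that failure mode concrete, here is a rough sketch in C of the
kind of byte-range fcntl() lock that ROMIO's generic Unix driver takes
(ADIOI_Set_lock); the test path is hypothetical. On a local file system
the lock succeeds; through the OrangeFS kernel mount this is the call
that fails and leads to the "File locking failed in ADIOI_Set_lock"
message quoted above:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        struct flock lk;
        /* Hypothetical test file on the OrangeFS mount point. */
        int fd = open("/mnt/orangefs/lock_test", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        memset(&lk, 0, sizeof(lk));
        lk.l_type   = F_WRLCK;    /* exclusive write lock */
        lk.l_whence = SEEK_SET;
        lk.l_start  = 0;
        lk.l_len    = 1024;       /* lock the first 1 KiB */

        /* ROMIO's generic driver acquires locks like this around its
         * read-modify-write cycles; on the OrangeFS VFS interface the
         * request is not supported and the call fails. */
        if (fcntl(fd, F_SETLKW, &lk) < 0) {
            perror("fcntl(F_SETLKW)");
            close(fd);
            return 1;
        }

        printf("byte-range lock acquired\n");
        lk.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &lk);
        close(fd);
        return 0;
    }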
> > >> > >
> > >> > > --
> > >> > > Rob Latham
> > >> > > Mathematics and Computer Science Division
> > >> > > Argonne National Lab, IL USA
> > >> > >
> > >>
> > >>
> > >>
> > >> --
> > >> Rob Latham
> > >> Mathematics and Computer Science Division
> > >> Argonne National Lab, IL USA
> > >>
> > >
> > >
>
> --
> Rob Latham
> Mathematics and Computer Science Division
> Argonne National Lab, IL USA
>
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
